Test Report: Docker_Linux_crio 21701

39a663ec30ddfd049b0783b78fdfbb9970ee2a8a:2025-10-06:41791

Failed tests (56/166)

Order  Failed test  Duration (s)
27 TestAddons/Setup 513.16
38 TestErrorSpam/setup 497.14
47 TestFunctional/serial/StartWithProxy 499.23
49 TestFunctional/serial/SoftStart 366.56
51 TestFunctional/serial/KubectlGetPods 2.19
61 TestFunctional/serial/MinikubeKubectlCmd 2.14
62 TestFunctional/serial/MinikubeKubectlCmdDirectly 2.09
63 TestFunctional/serial/ExtraConfig 737
64 TestFunctional/serial/ComponentHealth 1.99
67 TestFunctional/serial/InvalidService 0.05
70 TestFunctional/parallel/DashboardCmd 1.77
73 TestFunctional/parallel/StatusCmd 2.99
77 TestFunctional/parallel/ServiceCmdConnect 2.28
79 TestFunctional/parallel/PersistentVolumeClaim 241.57
83 TestFunctional/parallel/MySQL 2.35
89 TestFunctional/parallel/NodeLabels 1.37
103 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.09
107 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.33
108 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.02
111 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 0.07
113 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 92.25
114 TestFunctional/parallel/MountCmd/any-port 2.5
115 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.86
116 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.32
119 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.19
120 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.36
122 TestFunctional/parallel/ServiceCmd/DeployApp 0.05
123 TestFunctional/parallel/ServiceCmd/List 0.28
124 TestFunctional/parallel/ServiceCmd/JSONOutput 0.27
125 TestFunctional/parallel/ServiceCmd/HTTPS 0.26
126 TestFunctional/parallel/ServiceCmd/Format 0.29
127 TestFunctional/parallel/ServiceCmd/URL 0.27
141 TestMultiControlPlane/serial/StartCluster 501.92
142 TestMultiControlPlane/serial/DeployApp 93.08
143 TestMultiControlPlane/serial/PingHostFromPods 1.36
144 TestMultiControlPlane/serial/AddWorkerNode 1.53
145 TestMultiControlPlane/serial/NodeLabels 1.33
146 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.58
147 TestMultiControlPlane/serial/CopyFile 1.56
148 TestMultiControlPlane/serial/StopSecondaryNode 1.62
149 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 1.57
150 TestMultiControlPlane/serial/RestartSecondaryNode 49.96
151 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.62
152 TestMultiControlPlane/serial/RestartClusterKeepsNodes 370.02
153 TestMultiControlPlane/serial/DeleteSecondaryNode 1.82
154 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 1.54
155 TestMultiControlPlane/serial/StopCluster 1.36
156 TestMultiControlPlane/serial/RestartCluster 368.56
157 TestMultiControlPlane/serial/DegradedAfterClusterRestart 1.55
158 TestMultiControlPlane/serial/AddSecondaryNode 1.48
159 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.53
163 TestJSONOutput/start/Command 499.93
166 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
167 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
194 TestMinikubeProfile 507.21
221 TestMultiNode/serial/ValidateNameConflict 7200.059
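
The failure table above is plain whitespace-separated text: run order, test name, and duration in seconds. That makes it easy to post-process for triage. A minimal Go sketch, assuming the report has been saved to a file (the name report.txt is hypothetical) and the three-column layout shown above, sorting failures slowest-first:

    package main

    import (
        "bufio"
        "fmt"
        "log"
        "os"
        "sort"
        "strconv"
        "strings"
    )

    type failure struct {
        order    int
        name     string
        duration float64 // seconds, as in the table above
    }

    func main() {
        f, err := os.Open("report.txt") // hypothetical path to this report
        if err != nil {
            log.Fatal(err)
        }
        defer f.Close()

        var fails []failure
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            cols := strings.Fields(sc.Text())
            // Rows look like: "27 TestAddons/Setup 513.16"
            if len(cols) != 3 || !strings.HasPrefix(cols[1], "Test") {
                continue
            }
            order, err1 := strconv.Atoi(cols[0])
            dur, err2 := strconv.ParseFloat(cols[2], 64)
            if err1 != nil || err2 != nil {
                continue
            }
            fails = append(fails, failure{order, cols[1], dur})
        }
        if err := sc.Err(); err != nil {
            log.Fatal(err)
        }

        // Slowest first: long provisioning failures often explain cascades of short ones.
        sort.Slice(fails, func(i, j int) bool { return fails[i].duration > fails[j].duration })
        for _, fl := range fails {
            fmt.Printf("%9.2fs  #%d  %s\n", fl.duration, fl.order, fl.name)
        }
    }

Sorted this way, the roughly-500s entries (TestAddons/Setup, TestErrorSpam/setup, TestFunctional/serial/StartWithProxy, TestMultiControlPlane/serial/StartCluster, TestJSONOutput/start/Command, TestMinikubeProfile) group together, which usually points at a shared cluster-start failure cascading into the many short failures.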
TestAddons/Setup (513.16s)

=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-834039 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p addons-834039 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: exit status 80 (8m33.127488746s)
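
The "(dbg) Run:" and "Non-zero exit:" pair above is the integration harness shelling out to the freshly built minikube binary and recording the exit status and wall time. A rough stand-in for that helper (not minikube's actual test code) showing the os/exec pattern, including how the exit code is recovered:

    package main

    import (
        "errors"
        "fmt"
        "log"
        "os/exec"
        "time"
    )

    func main() {
        start := time.Now()
        // Abbreviated flag set; the real invocation is the long command line above.
        cmd := exec.Command("out/minikube-linux-amd64", "start", "-p", "addons-834039",
            "--wait=true", "--memory=4096", "--driver=docker", "--container-runtime=crio")
        out, err := cmd.CombinedOutput()

        var exitErr *exec.ExitError
        if errors.As(err, &exitErr) {
            // Mirrors the "Non-zero exit: ... exit status 80 (8m33.127488746s)" line above.
            log.Printf("Non-zero exit: exit status %d (%s)", exitErr.ExitCode(), time.Since(start))
        } else if err != nil {
            log.Fatal(err) // the binary could not be started at all
        }
        fmt.Printf("%s", out)
    }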

-- stdout --
	* [addons-834039] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21701
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21701-626179/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21701-626179/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "addons-834039" primary control-plane node in "addons-834039" cluster
	* Pulling base image v0.0.48-1759382731-21643 ...
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	
	

-- /stdout --
** stderr ** 
	I1006 13:56:24.221412  631118 out.go:360] Setting OutFile to fd 1 ...
	I1006 13:56:24.221686  631118 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 13:56:24.221696  631118 out.go:374] Setting ErrFile to fd 2...
	I1006 13:56:24.221700  631118 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 13:56:24.221948  631118 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-626179/.minikube/bin
	I1006 13:56:24.222562  631118 out.go:368] Setting JSON to false
	I1006 13:56:24.223657  631118 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":16720,"bootTime":1759742264,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1006 13:56:24.223743  631118 start.go:140] virtualization: kvm guest
	I1006 13:56:24.225508  631118 out.go:179] * [addons-834039] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1006 13:56:24.226743  631118 out.go:179]   - MINIKUBE_LOCATION=21701
	I1006 13:56:24.226761  631118 notify.go:220] Checking for updates...
	I1006 13:56:24.229116  631118 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1006 13:56:24.230072  631118 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21701-626179/kubeconfig
	I1006 13:56:24.231065  631118 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21701-626179/.minikube
	I1006 13:56:24.231974  631118 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1006 13:56:24.232903  631118 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1006 13:56:24.234073  631118 driver.go:421] Setting default libvirt URI to qemu:///system
	I1006 13:56:24.256674  631118 docker.go:123] docker version: linux-28.5.0:Docker Engine - Community
	I1006 13:56:24.256790  631118 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 13:56:24.311866  631118 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:false NGoroutines:46 SystemTime:2025-10-06 13:56:24.301877414 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1006 13:56:24.311984  631118 docker.go:318] overlay module found
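
The two `docker system info --format "{{json .}}"` probes above are how minikube learns the host's capacity and cgroup setup before validating the docker driver. A sketch of the same probe, decoding only a handful of the fields visible in the dump (the struct and field selection are illustrative, not minikube's own types):

    package main

    import (
        "encoding/json"
        "fmt"
        "log"
        "os/exec"
    )

    // Minimal subset of the `docker system info` JSON seen in the log above.
    type dockerInfo struct {
        NCPU          int    `json:"NCPU"`
        MemTotal      int64  `json:"MemTotal"`
        ServerVersion string `json:"ServerVersion"`
        Driver        string `json:"Driver"`
        CgroupDriver  string `json:"CgroupDriver"`
    }

    func main() {
        out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
        if err != nil {
            log.Fatal(err)
        }
        var info dockerInfo
        if err := json.Unmarshal(out, &info); err != nil {
            log.Fatal(err)
        }
        fmt.Printf("docker %s: %d CPUs, %d bytes RAM, storage=%s, cgroups=%s\n",
            info.ServerVersion, info.NCPU, info.MemTotal, info.Driver, info.CgroupDriver)
    }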
	I1006 13:56:24.313659  631118 out.go:179] * Using the docker driver based on user configuration
	I1006 13:56:24.314662  631118 start.go:304] selected driver: docker
	I1006 13:56:24.314679  631118 start.go:924] validating driver "docker" against <nil>
	I1006 13:56:24.314695  631118 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1006 13:56:24.315333  631118 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 13:56:24.370050  631118 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:false NGoroutines:46 SystemTime:2025-10-06 13:56:24.360512644 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1006 13:56:24.370267  631118 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1006 13:56:24.370537  631118 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1006 13:56:24.371952  631118 out.go:179] * Using Docker driver with root privileges
	I1006 13:56:24.372906  631118 cni.go:84] Creating CNI manager for ""
	I1006 13:56:24.372975  631118 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1006 13:56:24.372988  631118 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1006 13:56:24.373055  631118 start.go:348] cluster config:
	{Name:addons-834039 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-834039 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 13:56:24.374186  631118 out.go:179] * Starting "addons-834039" primary control-plane node in "addons-834039" cluster
	I1006 13:56:24.375098  631118 cache.go:123] Beginning downloading kic base image for docker with crio
	I1006 13:56:24.376099  631118 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1006 13:56:24.376929  631118 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1006 13:56:24.376963  631118 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21701-626179/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1006 13:56:24.376976  631118 cache.go:58] Caching tarball of preloaded images
	I1006 13:56:24.377024  631118 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1006 13:56:24.377069  631118 preload.go:233] Found /home/jenkins/minikube-integration/21701-626179/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1006 13:56:24.377085  631118 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1006 13:56:24.377439  631118 profile.go:143] Saving config to /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/addons-834039/config.json ...
	I1006 13:56:24.377468  631118 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/addons-834039/config.json: {Name:mk4892ab73d0e0197d035b9c9275017eb7c16636 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 13:56:24.393178  631118 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d to local cache
	I1006 13:56:24.393306  631118 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local cache directory
	I1006 13:56:24.393333  631118 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local cache directory, skipping pull
	I1006 13:56:24.393337  631118 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in cache, skipping pull
	I1006 13:56:24.393345  631118 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d as a tarball
	I1006 13:56:24.393352  631118 cache.go:165] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d from local cache
	I1006 13:56:37.278985  631118 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d from cached tarball
	I1006 13:56:37.279026  631118 cache.go:232] Successfully downloaded all kic artifacts
	I1006 13:56:37.279074  631118 start.go:360] acquireMachinesLock for addons-834039: {Name:mk877f2f8ab4c31fba536f46121bfb4045a06a10 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1006 13:56:37.279177  631118 start.go:364] duration metric: took 80.45µs to acquireMachinesLock for "addons-834039"
	I1006 13:56:37.279201  631118 start.go:93] Provisioning new machine with config: &{Name:addons-834039 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-834039 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1006 13:56:37.279318  631118 start.go:125] createHost starting for "" (driver="docker")
	I1006 13:56:37.280879  631118 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1006 13:56:37.281129  631118 start.go:159] libmachine.API.Create for "addons-834039" (driver="docker")
	I1006 13:56:37.281169  631118 client.go:168] LocalClient.Create starting
	I1006 13:56:37.281302  631118 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem
	I1006 13:56:37.439522  631118 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/cert.pem
	I1006 13:56:37.719281  631118 cli_runner.go:164] Run: docker network inspect addons-834039 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1006 13:56:37.737229  631118 cli_runner.go:211] docker network inspect addons-834039 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1006 13:56:37.737314  631118 network_create.go:284] running [docker network inspect addons-834039] to gather additional debugging logs...
	I1006 13:56:37.737341  631118 cli_runner.go:164] Run: docker network inspect addons-834039
	W1006 13:56:37.753078  631118 cli_runner.go:211] docker network inspect addons-834039 returned with exit code 1
	I1006 13:56:37.753108  631118 network_create.go:287] error running [docker network inspect addons-834039]: docker network inspect addons-834039: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-834039 not found
	I1006 13:56:37.753125  631118 network_create.go:289] output of [docker network inspect addons-834039]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-834039 not found
	
	** /stderr **
	I1006 13:56:37.753265  631118 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1006 13:56:37.769368  631118 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001cb5190}
	I1006 13:56:37.769403  631118 network_create.go:124] attempt to create docker network addons-834039 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1006 13:56:37.769447  631118 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-834039 addons-834039
	I1006 13:56:37.823689  631118 network_create.go:108] docker network addons-834039 192.168.49.0/24 created
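
Note the order of operations here: the first `docker network inspect addons-834039` is expected to fail with exit code 1 on a fresh host (the W-level lines above are that probe, not a real fault); only then does minikube pick a free private subnet and create the network. A condensed Go sketch of that inspect-then-create flow, using the values from this run:

    package main

    import (
        "fmt"
        "log"
        "os/exec"
    )

    // ensureNetwork mirrors the flow in the log: inspect first, and treat
    // "network ... not found" as the normal signal to create the network.
    func ensureNetwork(name, subnet, gateway string) error {
        if exec.Command("docker", "network", "inspect", name).Run() == nil {
            return nil // already exists
        }
        out, err := exec.Command("docker", "network", "create",
            "--driver=bridge",
            "--subnet="+subnet,
            "--gateway="+gateway,
            "-o", "--ip-masq", "-o", "--icc", // options exactly as in the log
            "--label=created_by.minikube.sigs.k8s.io=true",
            name).CombinedOutput()
        if err != nil {
            return fmt.Errorf("network create: %v: %s", err, out)
        }
        return nil
    }

    func main() {
        if err := ensureNetwork("addons-834039", "192.168.49.0/24", "192.168.49.1"); err != nil {
            log.Fatal(err)
        }
        fmt.Println("network ready")
    }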
	I1006 13:56:37.823720  631118 kic.go:121] calculated static IP "192.168.49.2" for the "addons-834039" container
	I1006 13:56:37.823781  631118 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1006 13:56:37.839748  631118 cli_runner.go:164] Run: docker volume create addons-834039 --label name.minikube.sigs.k8s.io=addons-834039 --label created_by.minikube.sigs.k8s.io=true
	I1006 13:56:37.858146  631118 oci.go:103] Successfully created a docker volume addons-834039
	I1006 13:56:37.858235  631118 cli_runner.go:164] Run: docker run --rm --name addons-834039-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-834039 --entrypoint /usr/bin/test -v addons-834039:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib
	I1006 13:56:41.327611  631118 cli_runner.go:217] Completed: docker run --rm --name addons-834039-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-834039 --entrypoint /usr/bin/test -v addons-834039:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib: (3.46932228s)
	I1006 13:56:41.327662  631118 oci.go:107] Successfully prepared a docker volume addons-834039
	I1006 13:56:41.327693  631118 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1006 13:56:41.327720  631118 kic.go:194] Starting extracting preloaded images to volume ...
	I1006 13:56:41.327774  631118 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21701-626179/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-834039:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir
	I1006 13:56:45.740841  631118 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21701-626179/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-834039:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir: (4.413006527s)
	I1006 13:56:45.740873  631118 kic.go:203] duration metric: took 4.413149321s to extract preloaded images to volume ...
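
The "duration metric: took ..." lines are simple wall-clock timings wrapped around each shelled-out step. The pattern in miniature (a sketch, not the actual cli_runner implementation):

    package main

    import (
        "log"
        "os/exec"
        "time"
    )

    // timedRun logs a command, runs it, and logs a duration metric on
    // completion, the way the cli_runner lines above do.
    func timedRun(name string, args ...string) error {
        log.Printf("Run: %s %v", name, args)
        start := time.Now()
        err := exec.Command(name, args...).Run()
        log.Printf("duration metric: took %s", time.Since(start))
        return err
    }

    func main() {
        if err := timedRun("docker", "volume", "ls"); err != nil {
            log.Fatal(err)
        }
    }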
	W1006 13:56:45.740963  631118 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1006 13:56:45.741002  631118 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1006 13:56:45.741054  631118 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1006 13:56:45.800279  631118 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-834039 --name addons-834039 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-834039 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-834039 --network addons-834039 --ip 192.168.49.2 --volume addons-834039:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d
	I1006 13:56:46.063536  631118 cli_runner.go:164] Run: docker container inspect addons-834039 --format={{.State.Running}}
	I1006 13:56:46.081468  631118 cli_runner.go:164] Run: docker container inspect addons-834039 --format={{.State.Status}}
	I1006 13:56:46.098095  631118 cli_runner.go:164] Run: docker exec addons-834039 stat /var/lib/dpkg/alternatives/iptables
	I1006 13:56:46.146474  631118 oci.go:144] the created container "addons-834039" has a running status.
	I1006 13:56:46.146507  631118 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21701-626179/.minikube/machines/addons-834039/id_rsa...
	I1006 13:56:46.615448  631118 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21701-626179/.minikube/machines/addons-834039/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1006 13:56:46.641379  631118 cli_runner.go:164] Run: docker container inspect addons-834039 --format={{.State.Status}}
	I1006 13:56:46.659006  631118 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1006 13:56:46.659053  631118 kic_runner.go:114] Args: [docker exec --privileged addons-834039 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1006 13:56:46.702329  631118 cli_runner.go:164] Run: docker container inspect addons-834039 --format={{.State.Status}}
	I1006 13:56:46.720332  631118 machine.go:93] provisionDockerMachine start ...
	I1006 13:56:46.720426  631118 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-834039
	I1006 13:56:46.737068  631118 main.go:141] libmachine: Using SSH client type: native
	I1006 13:56:46.737323  631118 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32868 <nil> <nil>}
	I1006 13:56:46.737336  631118 main.go:141] libmachine: About to run SSH command:
	hostname
	I1006 13:56:46.879989  631118 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-834039
	
	I1006 13:56:46.880029  631118 ubuntu.go:182] provisioning hostname "addons-834039"
	I1006 13:56:46.880091  631118 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-834039
	I1006 13:56:46.897621  631118 main.go:141] libmachine: Using SSH client type: native
	I1006 13:56:46.897834  631118 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32868 <nil> <nil>}
	I1006 13:56:46.897850  631118 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-834039 && echo "addons-834039" | sudo tee /etc/hostname
	I1006 13:56:47.049516  631118 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-834039
	
	I1006 13:56:47.049607  631118 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-834039
	I1006 13:56:47.067236  631118 main.go:141] libmachine: Using SSH client type: native
	I1006 13:56:47.067494  631118 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32868 <nil> <nil>}
	I1006 13:56:47.067514  631118 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-834039' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-834039/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-834039' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1006 13:56:47.211373  631118 main.go:141] libmachine: SSH cmd err, output: <nil>: 
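
The "Using SSH client type: native" lines are libmachine's built-in SSH client dialing the host port that docker published for the container's port 22 (32868 in this run) and running one command per session. A rough equivalent with golang.org/x/crypto/ssh, assuming the key path and port from this run:

    package main

    import (
        "fmt"
        "log"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        // Key path and port taken from the log above; adjust for your own profile.
        key, err := os.ReadFile(os.Getenv("HOME") + "/.minikube/machines/addons-834039/id_rsa")
        if err != nil {
            log.Fatal(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            log.Fatal(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local kic container
        }
        client, err := ssh.Dial("tcp", "127.0.0.1:32868", cfg)
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        sess, err := client.NewSession()
        if err != nil {
            log.Fatal(err)
        }
        defer sess.Close()

        out, err := sess.CombinedOutput("hostname") // the same first probe as the log
        if err != nil {
            log.Fatal(err)
        }
        fmt.Printf("%s", out)
    }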
	I1006 13:56:47.211412  631118 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21701-626179/.minikube CaCertPath:/home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21701-626179/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21701-626179/.minikube}
	I1006 13:56:47.211462  631118 ubuntu.go:190] setting up certificates
	I1006 13:56:47.211477  631118 provision.go:84] configureAuth start
	I1006 13:56:47.211557  631118 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-834039
	I1006 13:56:47.229561  631118 provision.go:143] copyHostCerts
	I1006 13:56:47.229672  631118 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21701-626179/.minikube/cert.pem (1123 bytes)
	I1006 13:56:47.229799  631118 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21701-626179/.minikube/key.pem (1679 bytes)
	I1006 13:56:47.229868  631118 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21701-626179/.minikube/ca.pem (1082 bytes)
	I1006 13:56:47.229929  631118 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21701-626179/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca-key.pem org=jenkins.addons-834039 san=[127.0.0.1 192.168.49.2 addons-834039 localhost minikube]
	I1006 13:56:47.268257  631118 provision.go:177] copyRemoteCerts
	I1006 13:56:47.268303  631118 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1006 13:56:47.268342  631118 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-834039
	I1006 13:56:47.285005  631118 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32868 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/addons-834039/id_rsa Username:docker}
	I1006 13:56:47.386187  631118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1006 13:56:47.405152  631118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1006 13:56:47.421546  631118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1006 13:56:47.437781  631118 provision.go:87] duration metric: took 226.282632ms to configureAuth
	I1006 13:56:47.437810  631118 ubuntu.go:206] setting minikube options for container-runtime
	I1006 13:56:47.437981  631118 config.go:182] Loaded profile config "addons-834039": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 13:56:47.438094  631118 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-834039
	I1006 13:56:47.456068  631118 main.go:141] libmachine: Using SSH client type: native
	I1006 13:56:47.456309  631118 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32868 <nil> <nil>}
	I1006 13:56:47.456330  631118 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1006 13:56:47.708563  631118 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1006 13:56:47.708596  631118 machine.go:96] duration metric: took 988.241342ms to provisionDockerMachine
	I1006 13:56:47.708610  631118 client.go:171] duration metric: took 10.427429662s to LocalClient.Create
	I1006 13:56:47.708633  631118 start.go:167] duration metric: took 10.427505147s to libmachine.API.Create "addons-834039"
	I1006 13:56:47.708643  631118 start.go:293] postStartSetup for "addons-834039" (driver="docker")
	I1006 13:56:47.708655  631118 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1006 13:56:47.708726  631118 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1006 13:56:47.708770  631118 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-834039
	I1006 13:56:47.726098  631118 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32868 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/addons-834039/id_rsa Username:docker}
	I1006 13:56:47.828853  631118 ssh_runner.go:195] Run: cat /etc/os-release
	I1006 13:56:47.832271  631118 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1006 13:56:47.832305  631118 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1006 13:56:47.832319  631118 filesync.go:126] Scanning /home/jenkins/minikube-integration/21701-626179/.minikube/addons for local assets ...
	I1006 13:56:47.832386  631118 filesync.go:126] Scanning /home/jenkins/minikube-integration/21701-626179/.minikube/files for local assets ...
	I1006 13:56:47.832418  631118 start.go:296] duration metric: took 123.768337ms for postStartSetup
	I1006 13:56:47.832786  631118 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-834039
	I1006 13:56:47.851083  631118 profile.go:143] Saving config to /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/addons-834039/config.json ...
	I1006 13:56:47.851377  631118 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1006 13:56:47.851433  631118 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-834039
	I1006 13:56:47.867444  631118 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32868 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/addons-834039/id_rsa Username:docker}
	I1006 13:56:47.965095  631118 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1006 13:56:47.969535  631118 start.go:128] duration metric: took 10.690199033s to createHost
	I1006 13:56:47.969559  631118 start.go:83] releasing machines lock for "addons-834039", held for 10.690370531s
	I1006 13:56:47.969616  631118 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-834039
	I1006 13:56:47.987094  631118 ssh_runner.go:195] Run: cat /version.json
	I1006 13:56:47.987149  631118 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-834039
	I1006 13:56:47.987170  631118 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1006 13:56:47.987253  631118 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-834039
	I1006 13:56:48.004885  631118 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32868 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/addons-834039/id_rsa Username:docker}
	I1006 13:56:48.005857  631118 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32868 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/addons-834039/id_rsa Username:docker}
	I1006 13:56:48.157264  631118 ssh_runner.go:195] Run: systemctl --version
	I1006 13:56:48.163470  631118 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1006 13:56:48.199118  631118 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1006 13:56:48.203702  631118 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1006 13:56:48.203754  631118 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1006 13:56:48.228263  631118 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
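
The `find ... -exec mv {} {}.mk_disabled` step above sidelines any preinstalled bridge/podman CNI configs so they cannot shadow the CNI minikube is about to install (kindnet here). The same rename pass as a Go sketch:

    package main

    import (
        "fmt"
        "log"
        "os"
        "path/filepath"
        "strings"
    )

    func main() {
        // Same patterns the find command above matches in /etc/cni/net.d.
        for _, pat := range []string{"/etc/cni/net.d/*bridge*", "/etc/cni/net.d/*podman*"} {
            matches, err := filepath.Glob(pat)
            if err != nil {
                log.Fatal(err)
            }
            for _, m := range matches {
                if strings.HasSuffix(m, ".mk_disabled") {
                    continue // already sidelined
                }
                if err := os.Rename(m, m+".mk_disabled"); err != nil {
                    log.Fatal(err)
                }
                fmt.Println("disabled", m)
            }
        }
    }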
	I1006 13:56:48.228288  631118 start.go:495] detecting cgroup driver to use...
	I1006 13:56:48.228317  631118 detect.go:190] detected "systemd" cgroup driver on host os
	I1006 13:56:48.228366  631118 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1006 13:56:48.243819  631118 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1006 13:56:48.255224  631118 docker.go:218] disabling cri-docker service (if available) ...
	I1006 13:56:48.255271  631118 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1006 13:56:48.270822  631118 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1006 13:56:48.286541  631118 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1006 13:56:48.362950  631118 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1006 13:56:48.448312  631118 docker.go:234] disabling docker service ...
	I1006 13:56:48.448379  631118 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1006 13:56:48.467504  631118 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1006 13:56:48.480121  631118 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1006 13:56:48.558394  631118 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1006 13:56:48.637704  631118 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1006 13:56:48.649961  631118 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1006 13:56:48.663764  631118 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1006 13:56:48.663826  631118 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 13:56:48.673712  631118 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1006 13:56:48.673764  631118 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 13:56:48.682114  631118 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 13:56:48.690231  631118 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 13:56:48.698267  631118 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1006 13:56:48.705863  631118 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 13:56:48.713942  631118 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 13:56:48.726655  631118 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 13:56:48.735172  631118 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1006 13:56:48.742106  631118 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1006 13:56:48.748949  631118 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 13:56:48.827306  631118 ssh_runner.go:195] Run: sudo systemctl restart crio
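
Each of the sed one-liners above is a line-oriented rewrite of /etc/crio/crio.conf.d/02-crio.conf, followed by a single daemon-reload and crio restart once all the edits are in. The pause-image edit expressed in Go (path and image from the log; a sketch of the technique, not minikube's code):

    package main

    import (
        "log"
        "os"
        "regexp"
    )

    func main() {
        const path = "/etc/crio/crio.conf.d/02-crio.conf"
        conf, err := os.ReadFile(path)
        if err != nil {
            log.Fatal(err)
        }
        // Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|'
        re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
        conf = re.ReplaceAll(conf, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
        if err := os.WriteFile(path, conf, 0o644); err != nil {
            log.Fatal(err)
        }
    }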
	I1006 13:56:48.925987  631118 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1006 13:56:48.926067  631118 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1006 13:56:48.930045  631118 start.go:563] Will wait 60s for crictl version
	I1006 13:56:48.930096  631118 ssh_runner.go:195] Run: which crictl
	I1006 13:56:48.933713  631118 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1006 13:56:48.959130  631118 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1006 13:56:48.959255  631118 ssh_runner.go:195] Run: crio --version
	I1006 13:56:48.990193  631118 ssh_runner.go:195] Run: crio --version
	I1006 13:56:49.021910  631118 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1006 13:56:49.022887  631118 cli_runner.go:164] Run: docker network inspect addons-834039 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1006 13:56:49.040015  631118 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1006 13:56:49.044176  631118 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1006 13:56:49.054681  631118 kubeadm.go:883] updating cluster {Name:addons-834039 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-834039 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1006 13:56:49.054819  631118 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1006 13:56:49.054875  631118 ssh_runner.go:195] Run: sudo crictl images --output json
	I1006 13:56:49.088508  631118 crio.go:514] all images are preloaded for cri-o runtime.
	I1006 13:56:49.088530  631118 crio.go:433] Images already preloaded, skipping extraction
	I1006 13:56:49.088581  631118 ssh_runner.go:195] Run: sudo crictl images --output json
	I1006 13:56:49.113601  631118 crio.go:514] all images are preloaded for cri-o runtime.
	I1006 13:56:49.113629  631118 cache_images.go:85] Images are preloaded, skipping loading
	I1006 13:56:49.113637  631118 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1006 13:56:49.113751  631118 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-834039 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-834039 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1006 13:56:49.113833  631118 ssh_runner.go:195] Run: crio config
	I1006 13:56:49.158824  631118 cni.go:84] Creating CNI manager for ""
	I1006 13:56:49.158852  631118 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1006 13:56:49.158878  631118 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1006 13:56:49.158909  631118 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-834039 NodeName:addons-834039 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1006 13:56:49.159075  631118 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-834039"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1006 13:56:49.159155  631118 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1006 13:56:49.167467  631118 binaries.go:44] Found k8s binaries, skipping transfer
	I1006 13:56:49.167528  631118 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1006 13:56:49.175125  631118 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1006 13:56:49.187635  631118 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1006 13:56:49.202400  631118 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
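
The kubeadm.yaml.new written above is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A quick structural sanity check before kubeadm consumes it, using gopkg.in/yaml.v3 and the path from the scp line above:

    package main

    import (
        "errors"
        "fmt"
        "io"
        "log"
        "os"

        "gopkg.in/yaml.v3"
    )

    func main() {
        f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
        if err != nil {
            log.Fatal(err)
        }
        defer f.Close()

        // Multi-document stream, so decode in a loop until EOF.
        dec := yaml.NewDecoder(f)
        for {
            var doc map[string]interface{}
            if err := dec.Decode(&doc); err != nil {
                if errors.Is(err, io.EOF) {
                    break
                }
                log.Fatalf("invalid YAML: %v", err)
            }
            fmt.Printf("%v %v\n", doc["apiVersion"], doc["kind"])
        }
    }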
	I1006 13:56:49.215125  631118 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1006 13:56:49.218677  631118 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1006 13:56:49.228449  631118 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 13:56:49.305789  631118 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1006 13:56:49.330636  631118 certs.go:69] Setting up /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/addons-834039 for IP: 192.168.49.2
	I1006 13:56:49.330660  631118 certs.go:195] generating shared ca certs ...
	I1006 13:56:49.330682  631118 certs.go:227] acquiring lock for ca certs: {Name:mka0cc25cb6a953e937aa825fc55167759271aaf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 13:56:49.330822  631118 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21701-626179/.minikube/ca.key
	I1006 13:56:49.431926  631118 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21701-626179/.minikube/ca.crt ...
	I1006 13:56:49.431963  631118 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-626179/.minikube/ca.crt: {Name:mk00dc535a15c52172e56817e6cd4e9a9ce46706 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 13:56:49.432174  631118 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21701-626179/.minikube/ca.key ...
	I1006 13:56:49.432195  631118 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-626179/.minikube/ca.key: {Name:mk47926c68677bade993c8f2d4f70f1bc4491762 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 13:56:49.432331  631118 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21701-626179/.minikube/proxy-client-ca.key
	I1006 13:56:49.548637  631118 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21701-626179/.minikube/proxy-client-ca.crt ...
	I1006 13:56:49.548671  631118 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-626179/.minikube/proxy-client-ca.crt: {Name:mkf9c7db4a53a8f4f39fd3a83bba99588649cea7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 13:56:49.548880  631118 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21701-626179/.minikube/proxy-client-ca.key ...
	I1006 13:56:49.548898  631118 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-626179/.minikube/proxy-client-ca.key: {Name:mk543f9e7246537f15ecde692a2e2748d4e9fc73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 13:56:49.549015  631118 certs.go:257] generating profile certs ...
	I1006 13:56:49.549097  631118 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/addons-834039/client.key
	I1006 13:56:49.549118  631118 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/addons-834039/client.crt with IP's: []
	I1006 13:56:49.663242  631118 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/addons-834039/client.crt ...
	I1006 13:56:49.663283  631118 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/addons-834039/client.crt: {Name:mkbfe55aa4c5fd16c5fc2a4b5e5c61b7ff8135bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 13:56:49.663513  631118 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/addons-834039/client.key ...
	I1006 13:56:49.663535  631118 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/addons-834039/client.key: {Name:mk015f0355c5fdc5624eb94e6f307d0dd8bd9840 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 13:56:49.663655  631118 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/addons-834039/apiserver.key.7f4f94fb
	I1006 13:56:49.663682  631118 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/addons-834039/apiserver.crt.7f4f94fb with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1006 13:56:49.818787  631118 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/addons-834039/apiserver.crt.7f4f94fb ...
	I1006 13:56:49.818835  631118 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/addons-834039/apiserver.crt.7f4f94fb: {Name:mk4e3bbddad5b72a72cee9ed220025348a28f206 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 13:56:49.819067  631118 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/addons-834039/apiserver.key.7f4f94fb ...
	I1006 13:56:49.819089  631118 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/addons-834039/apiserver.key.7f4f94fb: {Name:mkc801a81d99c0e68a87a67928be527939831fef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 13:56:49.819247  631118 certs.go:382] copying /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/addons-834039/apiserver.crt.7f4f94fb -> /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/addons-834039/apiserver.crt
	I1006 13:56:49.819373  631118 certs.go:386] copying /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/addons-834039/apiserver.key.7f4f94fb -> /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/addons-834039/apiserver.key
	I1006 13:56:49.819467  631118 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/addons-834039/proxy-client.key
	I1006 13:56:49.819498  631118 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/addons-834039/proxy-client.crt with IP's: []
	I1006 13:56:50.175693  631118 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/addons-834039/proxy-client.crt ...
	I1006 13:56:50.175733  631118 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/addons-834039/proxy-client.crt: {Name:mk3145204fc79246ed1872674712d7db92dc2e29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 13:56:50.175961  631118 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/addons-834039/proxy-client.key ...
	I1006 13:56:50.175982  631118 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/addons-834039/proxy-client.key: {Name:mk17661afc1d964ff3a2f0960977a874b6190400 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 13:56:50.176239  631118 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca-key.pem (1675 bytes)
	I1006 13:56:50.176302  631118 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem (1082 bytes)
	I1006 13:56:50.176339  631118 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/cert.pem (1123 bytes)
	I1006 13:56:50.176372  631118 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/key.pem (1679 bytes)
	I1006 13:56:50.177057  631118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1006 13:56:50.195418  631118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1006 13:56:50.213245  631118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1006 13:56:50.230545  631118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1006 13:56:50.247773  631118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/addons-834039/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1006 13:56:50.265138  631118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/addons-834039/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1006 13:56:50.282542  631118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/addons-834039/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1006 13:56:50.300127  631118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/addons-834039/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
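	With the profile certs copied into place, the apiserver serving cert can be checked by hand against the SANs requested above (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.49.2); a manual verification sketch, not part of the run:
	
	  sudo openssl x509 -in /var/lib/minikube/certs/apiserver.crt -noout -text \
	    | grep -A1 'Subject Alternative Name'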
	I1006 13:56:50.318451  631118 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1006 13:56:50.337390  631118 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1006 13:56:50.349820  631118 ssh_runner.go:195] Run: openssl version
	I1006 13:56:50.356224  631118 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1006 13:56:50.366781  631118 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1006 13:56:50.370588  631118 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  6 13:56 /usr/share/ca-certificates/minikubeCA.pem
	I1006 13:56:50.370648  631118 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1006 13:56:50.404751  631118 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1006 13:56:50.413598  631118 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1006 13:56:50.417213  631118 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1006 13:56:50.417275  631118 kubeadm.go:400] StartCluster: {Name:addons-834039 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-834039 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 13:56:50.417345  631118 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1006 13:56:50.417393  631118 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1006 13:56:50.445083  631118 cri.go:89] found id: ""
	I1006 13:56:50.445156  631118 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1006 13:56:50.453407  631118 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1006 13:56:50.461305  631118 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1006 13:56:50.461371  631118 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1006 13:56:50.469066  631118 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1006 13:56:50.469083  631118 kubeadm.go:157] found existing configuration files:
	
	I1006 13:56:50.469140  631118 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1006 13:56:50.476641  631118 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1006 13:56:50.476699  631118 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1006 13:56:50.483970  631118 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1006 13:56:50.491272  631118 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1006 13:56:50.491323  631118 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1006 13:56:50.498402  631118 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1006 13:56:50.505687  631118 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1006 13:56:50.505750  631118 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1006 13:56:50.513223  631118 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1006 13:56:50.520927  631118 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1006 13:56:50.520982  631118 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1006 13:56:50.528499  631118 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1006 13:56:50.567046  631118 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1006 13:56:50.567118  631118 kubeadm.go:318] [preflight] Running pre-flight checks
	I1006 13:56:50.587410  631118 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1006 13:56:50.587498  631118 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1006 13:56:50.587542  631118 kubeadm.go:318] OS: Linux
	I1006 13:56:50.587609  631118 kubeadm.go:318] CGROUPS_CPU: enabled
	I1006 13:56:50.587670  631118 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1006 13:56:50.587732  631118 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1006 13:56:50.587794  631118 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1006 13:56:50.587856  631118 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1006 13:56:50.587917  631118 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1006 13:56:50.587987  631118 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1006 13:56:50.588038  631118 kubeadm.go:318] CGROUPS_IO: enabled
	I1006 13:56:50.661292  631118 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1006 13:56:50.661450  631118 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1006 13:56:50.661586  631118 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1006 13:56:50.670042  631118 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1006 13:56:50.673050  631118 out.go:252]   - Generating certificates and keys ...
	I1006 13:56:50.673170  631118 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1006 13:56:50.673294  631118 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1006 13:56:50.759247  631118 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1006 13:56:50.856517  631118 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1006 13:56:51.082195  631118 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1006 13:56:51.321403  631118 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1006 13:56:51.736824  631118 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1006 13:56:51.736995  631118 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-834039 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1006 13:56:51.775874  631118 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1006 13:56:51.776018  631118 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-834039 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1006 13:56:51.880652  631118 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1006 13:56:52.197378  631118 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1006 13:56:52.399733  631118 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1006 13:56:52.399825  631118 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1006 13:56:52.499707  631118 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1006 13:56:52.596628  631118 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1006 13:56:52.741417  631118 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1006 13:56:53.240322  631118 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1006 13:56:53.442596  631118 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1006 13:56:53.443176  631118 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1006 13:56:53.446926  631118 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1006 13:56:53.448837  631118 out.go:252]   - Booting up control plane ...
	I1006 13:56:53.448933  631118 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1006 13:56:53.449029  631118 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1006 13:56:53.449289  631118 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1006 13:56:53.462633  631118 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1006 13:56:53.462730  631118 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1006 13:56:53.470439  631118 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1006 13:56:53.470677  631118 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1006 13:56:53.470739  631118 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1006 13:56:53.567165  631118 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1006 13:56:53.567372  631118 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1006 13:56:54.568603  631118 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001516043s
	I1006 13:56:54.572106  631118 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1006 13:56:54.572265  631118 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1006 13:56:54.572385  631118 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1006 13:56:54.572508  631118 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1006 14:00:54.573232  631118 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000636296s
	I1006 14:00:54.573477  631118 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000795395s
	I1006 14:00:54.573607  631118 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000759943s
	I1006 14:00:54.573624  631118 kubeadm.go:318] 
	I1006 14:00:54.573776  631118 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1006 14:00:54.573913  631118 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1006 14:00:54.574058  631118 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1006 14:00:54.574260  631118 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1006 14:00:54.574383  631118 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1006 14:00:54.574531  631118 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1006 14:00:54.574548  631118 kubeadm.go:318] 
	I1006 14:00:54.577891  631118 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1006 14:00:54.578050  631118 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1006 14:00:54.578739  631118 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1006 14:00:54.578818  631118 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
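	The three control-plane health endpoints kubeadm polled above can also be probed by hand from inside the node to see which component is down; a debugging sketch (all URLs taken from the log, not part of the automated run):
	
	  curl -k https://192.168.49.2:8443/livez      # kube-apiserver
	  curl -k https://127.0.0.1:10257/healthz      # kube-controller-manager
	  curl -k https://127.0.0.1:10259/livez        # kube-scheduler
	  curl http://127.0.0.1:10248/healthz          # kubelet (reported healthy above)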
	W1006 14:00:54.579033  631118 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [addons-834039 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [addons-834039 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001516043s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000636296s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000795395s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000759943s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	I1006 14:00:54.579130  631118 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1006 14:00:55.029050  631118 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1006 14:00:55.041933  631118 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1006 14:00:55.041990  631118 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1006 14:00:55.049824  631118 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1006 14:00:55.049839  631118 kubeadm.go:157] found existing configuration files:
	
	I1006 14:00:55.049886  631118 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1006 14:00:55.057521  631118 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1006 14:00:55.057583  631118 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1006 14:00:55.065537  631118 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1006 14:00:55.072840  631118 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1006 14:00:55.072896  631118 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1006 14:00:55.079876  631118 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1006 14:00:55.087697  631118 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1006 14:00:55.087757  631118 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1006 14:00:55.095333  631118 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1006 14:00:55.102958  631118 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1006 14:00:55.103017  631118 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1006 14:00:55.110518  631118 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1006 14:00:55.148194  631118 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1006 14:00:55.148288  631118 kubeadm.go:318] [preflight] Running pre-flight checks
	I1006 14:00:55.168791  631118 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1006 14:00:55.168857  631118 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1006 14:00:55.168921  631118 kubeadm.go:318] OS: Linux
	I1006 14:00:55.169002  631118 kubeadm.go:318] CGROUPS_CPU: enabled
	I1006 14:00:55.169077  631118 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1006 14:00:55.169156  631118 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1006 14:00:55.169231  631118 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1006 14:00:55.169310  631118 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1006 14:00:55.169356  631118 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1006 14:00:55.169412  631118 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1006 14:00:55.169486  631118 kubeadm.go:318] CGROUPS_IO: enabled
	I1006 14:00:55.228570  631118 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1006 14:00:55.228764  631118 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1006 14:00:55.228899  631118 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1006 14:00:55.235879  631118 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1006 14:00:55.239582  631118 out.go:252]   - Generating certificates and keys ...
	I1006 14:00:55.239697  631118 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1006 14:00:55.239809  631118 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1006 14:00:55.239929  631118 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1006 14:00:55.240028  631118 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1006 14:00:55.240140  631118 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1006 14:00:55.240253  631118 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1006 14:00:55.240363  631118 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1006 14:00:55.240473  631118 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1006 14:00:55.240587  631118 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1006 14:00:55.240694  631118 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1006 14:00:55.240748  631118 kubeadm.go:318] [certs] Using the existing "sa" key
	I1006 14:00:55.240833  631118 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1006 14:00:55.302124  631118 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1006 14:00:55.467682  631118 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1006 14:00:55.599685  631118 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1006 14:00:56.164076  631118 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1006 14:00:56.225954  631118 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1006 14:00:56.226388  631118 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1006 14:00:56.228844  631118 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1006 14:00:56.232019  631118 out.go:252]   - Booting up control plane ...
	I1006 14:00:56.232125  631118 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1006 14:00:56.232235  631118 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1006 14:00:56.232714  631118 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1006 14:00:56.246974  631118 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1006 14:00:56.247119  631118 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1006 14:00:56.253748  631118 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1006 14:00:56.253894  631118 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1006 14:00:56.253951  631118 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1006 14:00:56.362854  631118 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1006 14:00:56.363042  631118 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1006 14:00:56.863951  631118 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 501.072641ms
	I1006 14:00:56.866751  631118 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1006 14:00:56.866884  631118 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1006 14:00:56.867007  631118 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1006 14:00:56.867078  631118 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1006 14:04:56.868310  631118 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.001197039s
	I1006 14:04:56.868437  631118 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001321375s
	I1006 14:04:56.868533  631118 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.001276433s
	I1006 14:04:56.868550  631118 kubeadm.go:318] 
	I1006 14:04:56.868693  631118 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1006 14:04:56.868821  631118 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1006 14:04:56.868951  631118 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1006 14:04:56.869082  631118 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1006 14:04:56.869179  631118 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1006 14:04:56.869323  631118 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1006 14:04:56.869336  631118 kubeadm.go:318] 
	I1006 14:04:56.872708  631118 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1006 14:04:56.872834  631118 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1006 14:04:56.873555  631118 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1006 14:04:56.873659  631118 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1006 14:04:56.873764  631118 kubeadm.go:402] duration metric: took 8m6.456495066s to StartCluster
	I1006 14:04:56.873842  631118 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:04:56.874022  631118 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:04:56.902724  631118 cri.go:89] found id: ""
	I1006 14:04:56.902774  631118 logs.go:282] 0 containers: []
	W1006 14:04:56.902787  631118 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:04:56.902798  631118 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:04:56.902870  631118 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:04:56.928771  631118 cri.go:89] found id: ""
	I1006 14:04:56.928797  631118 logs.go:282] 0 containers: []
	W1006 14:04:56.928804  631118 logs.go:284] No container was found matching "etcd"
	I1006 14:04:56.928810  631118 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:04:56.928864  631118 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:04:56.954555  631118 cri.go:89] found id: ""
	I1006 14:04:56.954579  631118 logs.go:282] 0 containers: []
	W1006 14:04:56.954588  631118 logs.go:284] No container was found matching "coredns"
	I1006 14:04:56.954595  631118 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:04:56.954651  631118 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:04:56.980459  631118 cri.go:89] found id: ""
	I1006 14:04:56.980488  631118 logs.go:282] 0 containers: []
	W1006 14:04:56.980497  631118 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:04:56.980503  631118 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:04:56.980564  631118 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:04:57.005245  631118 cri.go:89] found id: ""
	I1006 14:04:57.005279  631118 logs.go:282] 0 containers: []
	W1006 14:04:57.005294  631118 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:04:57.005303  631118 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:04:57.005368  631118 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:04:57.032089  631118 cri.go:89] found id: ""
	I1006 14:04:57.032119  631118 logs.go:282] 0 containers: []
	W1006 14:04:57.032132  631118 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:04:57.032140  631118 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:04:57.032223  631118 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:04:57.057651  631118 cri.go:89] found id: ""
	I1006 14:04:57.057680  631118 logs.go:282] 0 containers: []
	W1006 14:04:57.057692  631118 logs.go:284] No container was found matching "kindnet"
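	Every per-component crictl query above came back empty, meaning CRI-O never created the control-plane containers at all. Equivalent manual checks against the crio socket (a sketch):
	
	  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock pods     # any pod sandboxes?
	  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a    # any containers, running or exited?
	  sudo journalctl -u crio -n 100 --no-pager                              # sandbox/container creation errors, if logged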
	I1006 14:04:57.057708  631118 logs.go:123] Gathering logs for kubelet ...
	I1006 14:04:57.057730  631118 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:04:57.123306  631118 logs.go:123] Gathering logs for dmesg ...
	I1006 14:04:57.123348  631118 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:04:57.137012  631118 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:04:57.137045  631118 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:04:57.198269  631118 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:04:57.189885    2364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 14:04:57.190477    2364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 14:04:57.192286    2364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 14:04:57.192824    2364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 14:04:57.194440    2364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:04:57.189885    2364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 14:04:57.190477    2364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 14:04:57.192286    2364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 14:04:57.192824    2364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 14:04:57.194440    2364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 14:04:57.198295  631118 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:04:57.198308  631118 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:04:57.261301  631118 logs.go:123] Gathering logs for container status ...
	I1006 14:04:57.261333  631118 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1006 14:04:57.290846  631118 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.072641ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.001197039s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001321375s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001276433s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1006 14:04:57.290899  631118 out.go:285] * 
	W1006 14:04:57.290981  631118 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.072641ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.001197039s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001321375s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001276433s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1006 14:04:57.290993  631118 out.go:285] * 
	W1006 14:04:57.292833  631118 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1006 14:04:57.296266  631118 out.go:203] 
	W1006 14:04:57.297425  631118 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.072641ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.001197039s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001321375s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001276433s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1006 14:04:57.297451  631118 out.go:285] * 
	I1006 14:04:57.299248  631118 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:110: out/minikube-linux-amd64 start -p addons-834039 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher failed: exit status 80
--- FAIL: TestAddons/Setup (513.16s)
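The failure mode above is consistent: the kubelet reports healthy within about half a second, but all three control-plane components fail their health checks for the full 4m0s window, after which kubeadm gives up. The output's own troubleshooting hint can be followed directly on the node. A minimal diagnostic sketch, assuming shell access to the node (e.g. via `minikube ssh -p addons-834039`); the crictl and journalctl invocations are taken verbatim from the log above, and CONTAINERID is a placeholder:

	# Control-plane containers as CRI-O sees them (the hint kubeadm prints above)
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# Inspect a failing container's logs; CONTAINERID is a placeholder
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID
	# CRI-O service journal, the same query minikube runs when gathering logs
	sudo journalctl -u crio -n 400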

                                                
                                    
TestErrorSpam/setup (497.14s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-500584 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-500584 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p nospam-500584 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-500584 --driver=docker  --container-runtime=crio: exit status 80 (8m17.131384803s)

                                                
                                                
-- stdout --
	* [nospam-500584] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21701
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21701-626179/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21701-626179/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "nospam-500584" primary control-plane node in "nospam-500584" cluster
	* Pulling base image v0.0.48-1759382731-21643 ...
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost nospam-500584] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost nospam-500584] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.00105036s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.001063896s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001309338s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001266713s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.49.2:8443: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 502.012221ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000195256s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000340498s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.0005883s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 502.012221ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000195256s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000340498s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.0005883s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	* 

                                                
                                                
** /stderr **
error_spam_test.go:83: "out/minikube-linux-amd64 start -p nospam-500584 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-500584 --driver=docker  --container-runtime=crio" failed: exit status 80
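Every health endpoint kubeadm polls is named verbatim in the output above, so the quickest way to narrow down which component is down is to probe them from inside the node. A minimal sketch, assuming shell access (e.g. `minikube ssh -p nospam-500584`); -k is needed because the control-plane endpoints serve the cluster's self-signed certificates:

	curl -sk https://192.168.49.2:8443/livez    # kube-apiserver (connection refused in this run)
	curl -sk https://127.0.0.1:10257/healthz    # kube-controller-manager
	curl -sk https://127.0.0.1:10259/livez      # kube-scheduler
	curl -s  http://127.0.0.1:10248/healthz     # kubelet (the only check that passed)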
error_spam_test.go:96: unexpected stderr: "! initialization failed, will try again: wait: sudo /bin/bash -c \"env PATH=\"/var/lib/minikube/binaries/v1.34.1:$PATH\" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables\": Process exited with status 1"
error_spam_test.go:96: unexpected stderr: "stdout:"
error_spam_test.go:96: unexpected stderr: "[init] Using Kubernetes version: v1.34.1"
error_spam_test.go:96: unexpected stderr: "[preflight] Running pre-flight checks"
error_spam_test.go:96: unexpected stderr: "[preflight] The system verification failed. Printing the output from the verification:"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mKERNEL_VERSION\x1b[0m: \x1b[0;32m6.8.0-1041-gcp\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mOS\x1b[0m: \x1b[0;32mLinux\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_CPU\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_CPUSET\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_DEVICES\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_FREEZER\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_MEMORY\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_PIDS\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_HUGETLB\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_IO\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "[preflight] Pulling images required for setting up a Kubernetes cluster"
error_spam_test.go:96: unexpected stderr: "[preflight] This might take a minute or two, depending on the speed of your internet connection"
error_spam_test.go:96: unexpected stderr: "[preflight] You can also perform this action beforehand using 'kubeadm config images pull'"
error_spam_test.go:96: unexpected stderr: "[certs] Using certificateDir folder \"/var/lib/minikube/certs\""
error_spam_test.go:96: unexpected stderr: "[certs] Using existing ca certificate authority"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing apiserver certificate and key on disk"
error_spam_test.go:96: unexpected stderr: "[certs] Generating \"apiserver-kubelet-client\" certificate and key"
error_spam_test.go:96: unexpected stderr: "[certs] Generating \"front-proxy-ca\" certificate and key"
error_spam_test.go:96: unexpected stderr: "[certs] Generating \"front-proxy-client\" certificate and key"
error_spam_test.go:96: unexpected stderr: "[certs] Generating \"etcd/ca\" certificate and key"
error_spam_test.go:96: unexpected stderr: "[certs] Generating \"etcd/server\" certificate and key"
error_spam_test.go:96: unexpected stderr: "[certs] etcd/server serving cert is signed for DNS names [localhost nospam-500584] and IPs [192.168.49.2 127.0.0.1 ::1]"
error_spam_test.go:96: unexpected stderr: "[certs] Generating \"etcd/peer\" certificate and key"
error_spam_test.go:96: unexpected stderr: "[certs] etcd/peer serving cert is signed for DNS names [localhost nospam-500584] and IPs [192.168.49.2 127.0.0.1 ::1]"
error_spam_test.go:96: unexpected stderr: "[certs] Generating \"etcd/healthcheck-client\" certificate and key"
error_spam_test.go:96: unexpected stderr: "[certs] Generating \"apiserver-etcd-client\" certificate and key"
error_spam_test.go:96: unexpected stderr: "[certs] Generating \"sa\" key and public key"
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Using kubeconfig folder \"/etc/kubernetes\""
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Writing \"admin.conf\" kubeconfig file"
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Writing \"super-admin.conf\" kubeconfig file"
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Writing \"kubelet.conf\" kubeconfig file"
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Writing \"controller-manager.conf\" kubeconfig file"
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Writing \"scheduler.conf\" kubeconfig file"
error_spam_test.go:96: unexpected stderr: "[etcd] Creating static Pod manifest for local etcd in \"/etc/kubernetes/manifests\""
error_spam_test.go:96: unexpected stderr: "[control-plane] Using manifest folder \"/etc/kubernetes/manifests\""
error_spam_test.go:96: unexpected stderr: "[control-plane] Creating static Pod manifest for \"kube-apiserver\""
error_spam_test.go:96: unexpected stderr: "[control-plane] Creating static Pod manifest for \"kube-controller-manager\""
error_spam_test.go:96: unexpected stderr: "[control-plane] Creating static Pod manifest for \"kube-scheduler\""
error_spam_test.go:96: unexpected stderr: "[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\""
error_spam_test.go:96: unexpected stderr: "[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/instance-config.yaml\""
error_spam_test.go:96: unexpected stderr: "[patches] Applied patch of type \"application/strategic-merge-patch+json\" to target \"kubeletconfiguration\""
error_spam_test.go:96: unexpected stderr: "[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\""
error_spam_test.go:96: unexpected stderr: "[kubelet-start] Starting the kubelet"
error_spam_test.go:96: unexpected stderr: "[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory \"/etc/kubernetes/manifests\""
error_spam_test.go:96: unexpected stderr: "[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s"
error_spam_test.go:96: unexpected stderr: "[kubelet-check] The kubelet is healthy after 1.00105036s"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] kube-apiserver is not healthy after 4m0.001063896s"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] kube-scheduler is not healthy after 4m0.001309338s"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] kube-controller-manager is not healthy after 4m0.001266713s"
error_spam_test.go:96: unexpected stderr: "A control plane component may have crashed or exited when started by the container runtime."
error_spam_test.go:96: unexpected stderr: "To troubleshoot, list all containers using your preferred container runtimes CLI."
error_spam_test.go:96: unexpected stderr: "Here is one example how you may list all running Kubernetes containers by using crictl:"
error_spam_test.go:96: unexpected stderr: "\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'"
error_spam_test.go:96: unexpected stderr: "\tOnce you have found the failing container, you can inspect its logs with:"
error_spam_test.go:96: unexpected stderr: "\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'"
error_spam_test.go:96: unexpected stderr: "stderr:"
error_spam_test.go:96: unexpected stderr: "\t[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: \"configs\", output: \"modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\\n\", err: exit status 1"
error_spam_test.go:96: unexpected stderr: "\t[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'"
error_spam_test.go:96: unexpected stderr: "error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get \"https://control-plane.minikube.internal:8443/livez?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get \"https://127.0.0.1:10259/livez\": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get \"https://127.0.0.1:10257/healthz\": dial tcp 127.0.0.1:10257: connect: connection refused]"
error_spam_test.go:96: unexpected stderr: "To see the stack trace of this error execute with --v=5 or higher"
error_spam_test.go:96: unexpected stderr: "* "
error_spam_test.go:96: unexpected stderr: "X Error starting cluster: wait: sudo /bin/bash -c \"env PATH=\"/var/lib/minikube/binaries/v1.34.1:$PATH\" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables\": Process exited with status 1"
error_spam_test.go:96: unexpected stderr: "stdout:"
error_spam_test.go:96: unexpected stderr: "[init] Using Kubernetes version: v1.34.1"
error_spam_test.go:96: unexpected stderr: "[preflight] Running pre-flight checks"
error_spam_test.go:96: unexpected stderr: "[preflight] The system verification failed. Printing the output from the verification:"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mKERNEL_VERSION\x1b[0m: \x1b[0;32m6.8.0-1041-gcp\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mOS\x1b[0m: \x1b[0;32mLinux\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_CPU\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_CPUSET\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_DEVICES\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_FREEZER\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_MEMORY\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_PIDS\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_HUGETLB\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_IO\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "[preflight] Pulling images required for setting up a Kubernetes cluster"
error_spam_test.go:96: unexpected stderr: "[preflight] This might take a minute or two, depending on the speed of your internet connection"
error_spam_test.go:96: unexpected stderr: "[preflight] You can also perform this action beforehand using 'kubeadm config images pull'"
error_spam_test.go:96: unexpected stderr: "[certs] Using certificateDir folder \"/var/lib/minikube/certs\""
error_spam_test.go:96: unexpected stderr: "[certs] Using existing ca certificate authority"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing apiserver certificate and key on disk"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing apiserver-kubelet-client certificate and key on disk"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing front-proxy-ca certificate authority"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing front-proxy-client certificate and key on disk"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing etcd/ca certificate authority"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing etcd/server certificate and key on disk"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing etcd/peer certificate and key on disk"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing etcd/healthcheck-client certificate and key on disk"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing apiserver-etcd-client certificate and key on disk"
error_spam_test.go:96: unexpected stderr: "[certs] Using the existing \"sa\" key"
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Using kubeconfig folder \"/etc/kubernetes\""
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Writing \"admin.conf\" kubeconfig file"
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Writing \"super-admin.conf\" kubeconfig file"
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Writing \"kubelet.conf\" kubeconfig file"
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Writing \"controller-manager.conf\" kubeconfig file"
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Writing \"scheduler.conf\" kubeconfig file"
error_spam_test.go:96: unexpected stderr: "[etcd] Creating static Pod manifest for local etcd in \"/etc/kubernetes/manifests\""
error_spam_test.go:96: unexpected stderr: "[control-plane] Using manifest folder \"/etc/kubernetes/manifests\""
error_spam_test.go:96: unexpected stderr: "[control-plane] Creating static Pod manifest for \"kube-apiserver\""
error_spam_test.go:96: unexpected stderr: "[control-plane] Creating static Pod manifest for \"kube-controller-manager\""
error_spam_test.go:96: unexpected stderr: "[control-plane] Creating static Pod manifest for \"kube-scheduler\""
error_spam_test.go:96: unexpected stderr: "[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\""
error_spam_test.go:96: unexpected stderr: "[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/instance-config.yaml\""
error_spam_test.go:96: unexpected stderr: "[patches] Applied patch of type \"application/strategic-merge-patch+json\" to target \"kubeletconfiguration\""
error_spam_test.go:96: unexpected stderr: "[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\""
error_spam_test.go:96: unexpected stderr: "[kubelet-start] Starting the kubelet"
error_spam_test.go:96: unexpected stderr: "[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory \"/etc/kubernetes/manifests\""
error_spam_test.go:96: unexpected stderr: "[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s"
error_spam_test.go:96: unexpected stderr: "[kubelet-check] The kubelet is healthy after 502.012221ms"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] kube-scheduler is not healthy after 4m0.000195256s"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] kube-apiserver is not healthy after 4m0.000340498s"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] kube-controller-manager is not healthy after 4m0.0005883s"
error_spam_test.go:96: unexpected stderr: "A control plane component may have crashed or exited when started by the container runtime."
error_spam_test.go:96: unexpected stderr: "To troubleshoot, list all containers using your preferred container runtimes CLI."
error_spam_test.go:96: unexpected stderr: "Here is one example how you may list all running Kubernetes containers by using crictl:"
error_spam_test.go:96: unexpected stderr: "\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'"
error_spam_test.go:96: unexpected stderr: "\tOnce you have found the failing container, you can inspect its logs with:"
error_spam_test.go:96: unexpected stderr: "\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'"
error_spam_test.go:96: unexpected stderr: "stderr:"
error_spam_test.go:96: unexpected stderr: "\t[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: \"configs\", output: \"modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\\n\", err: exit status 1"
error_spam_test.go:96: unexpected stderr: "\t[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'"
error_spam_test.go:96: unexpected stderr: "error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get \"https://127.0.0.1:10259/livez\": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get \"https://127.0.0.1:10257/healthz\": dial tcp 127.0.0.1:10257: connect: connection refused]"
error_spam_test.go:96: unexpected stderr: "To see the stack trace of this error execute with --v=5 or higher"
error_spam_test.go:96: unexpected stderr: "* "
error_spam_test.go:96: unexpected stderr: "╭─────────────────────────────────────────────────────────────────────────────────────────────╮"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * If the above advice does not help, please let us know:                                 │"
error_spam_test.go:96: unexpected stderr: "│      https://github.com/kubernetes/minikube/issues/new/choose                               │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "╰─────────────────────────────────────────────────────────────────────────────────────────────╯"
error_spam_test.go:96: unexpected stderr: "X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c \"env PATH=\"/var/lib/minikube/binaries/v1.34.1:$PATH\" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables\": Process exited with status 1"
error_spam_test.go:96: unexpected stderr: "stdout:"
error_spam_test.go:96: unexpected stderr: "[init] Using Kubernetes version: v1.34.1"
error_spam_test.go:96: unexpected stderr: "[preflight] Running pre-flight checks"
error_spam_test.go:96: unexpected stderr: "[preflight] The system verification failed. Printing the output from the verification:"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mKERNEL_VERSION\x1b[0m: \x1b[0;32m6.8.0-1041-gcp\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mOS\x1b[0m: \x1b[0;32mLinux\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_CPU\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_CPUSET\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_DEVICES\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_FREEZER\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_MEMORY\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_PIDS\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_HUGETLB\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_IO\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "[preflight] Pulling images required for setting up a Kubernetes cluster"
error_spam_test.go:96: unexpected stderr: "[preflight] This might take a minute or two, depending on the speed of your internet connection"
error_spam_test.go:96: unexpected stderr: "[preflight] You can also perform this action beforehand using 'kubeadm config images pull'"
error_spam_test.go:96: unexpected stderr: "[certs] Using certificateDir folder \"/var/lib/minikube/certs\""
error_spam_test.go:96: unexpected stderr: "[certs] Using existing ca certificate authority"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing apiserver certificate and key on disk"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing apiserver-kubelet-client certificate and key on disk"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing front-proxy-ca certificate authority"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing front-proxy-client certificate and key on disk"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing etcd/ca certificate authority"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing etcd/server certificate and key on disk"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing etcd/peer certificate and key on disk"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing etcd/healthcheck-client certificate and key on disk"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing apiserver-etcd-client certificate and key on disk"
error_spam_test.go:96: unexpected stderr: "[certs] Using the existing \"sa\" key"
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Using kubeconfig folder \"/etc/kubernetes\""
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Writing \"admin.conf\" kubeconfig file"
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Writing \"super-admin.conf\" kubeconfig file"
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Writing \"kubelet.conf\" kubeconfig file"
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Writing \"controller-manager.conf\" kubeconfig file"
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Writing \"scheduler.conf\" kubeconfig file"
error_spam_test.go:96: unexpected stderr: "[etcd] Creating static Pod manifest for local etcd in \"/etc/kubernetes/manifests\""
error_spam_test.go:96: unexpected stderr: "[control-plane] Using manifest folder \"/etc/kubernetes/manifests\""
error_spam_test.go:96: unexpected stderr: "[control-plane] Creating static Pod manifest for \"kube-apiserver\""
error_spam_test.go:96: unexpected stderr: "[control-plane] Creating static Pod manifest for \"kube-controller-manager\""
error_spam_test.go:96: unexpected stderr: "[control-plane] Creating static Pod manifest for \"kube-scheduler\""
error_spam_test.go:96: unexpected stderr: "[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\""
error_spam_test.go:96: unexpected stderr: "[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/instance-config.yaml\""
error_spam_test.go:96: unexpected stderr: "[patches] Applied patch of type \"application/strategic-merge-patch+json\" to target \"kubeletconfiguration\""
error_spam_test.go:96: unexpected stderr: "[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\""
error_spam_test.go:96: unexpected stderr: "[kubelet-start] Starting the kubelet"
error_spam_test.go:96: unexpected stderr: "[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory \"/etc/kubernetes/manifests\""
error_spam_test.go:96: unexpected stderr: "[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s"
error_spam_test.go:96: unexpected stderr: "[kubelet-check] The kubelet is healthy after 502.012221ms"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] kube-scheduler is not healthy after 4m0.000195256s"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] kube-apiserver is not healthy after 4m0.000340498s"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] kube-controller-manager is not healthy after 4m0.0005883s"
error_spam_test.go:96: unexpected stderr: "A control plane component may have crashed or exited when started by the container runtime."
error_spam_test.go:96: unexpected stderr: "To troubleshoot, list all containers using your preferred container runtimes CLI."
error_spam_test.go:96: unexpected stderr: "Here is one example how you may list all running Kubernetes containers by using crictl:"
error_spam_test.go:96: unexpected stderr: "\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'"
error_spam_test.go:96: unexpected stderr: "\tOnce you have found the failing container, you can inspect its logs with:"
error_spam_test.go:96: unexpected stderr: "\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'"
error_spam_test.go:96: unexpected stderr: "stderr:"
error_spam_test.go:96: unexpected stderr: "\t[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: \"configs\", output: \"modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\\n\", err: exit status 1"
error_spam_test.go:96: unexpected stderr: "\t[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'"
error_spam_test.go:96: unexpected stderr: "error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get \"https://127.0.0.1:10259/livez\": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get \"https://127.0.0.1:10257/healthz\": dial tcp 127.0.0.1:10257: connect: connection refused]"
error_spam_test.go:96: unexpected stderr: "To see the stack trace of this error execute with --v=5 or higher"
error_spam_test.go:96: unexpected stderr: "* "
error_spam_test.go:110: minikube stdout:
* [nospam-500584] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
- MINIKUBE_LOCATION=21701
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/21701-626179/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/21701-626179/.minikube
- MINIKUBE_BIN=out/minikube-linux-amd64
- MINIKUBE_FORCE_SYSTEMD=
* Using the docker driver based on user configuration
* Using Docker driver with root privileges
* Starting "nospam-500584" primary control-plane node in "nospam-500584" cluster
* Pulling base image v0.0.48-1759382731-21643 ...
* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
error_spam_test.go:111: minikube stderr:
! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.34.1
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 6.8.0-1041-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_IO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost nospam-500584] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost nospam-500584] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 1.00105036s
[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
[control-plane-check] kube-apiserver is not healthy after 4m0.001063896s
[control-plane-check] kube-scheduler is not healthy after 4m0.001309338s
[control-plane-check] kube-controller-manager is not healthy after 4m0.001266713s
A control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	Once you have found the failing container, you can inspect its logs with:
	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
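Because these runs use the docker driver, the CRI-O socket sits inside the node container rather than on the CI host, so the crictl commands suggested above have to run in the node itself. A minimal sketch via minikube ssh, assuming the "nospam-500584" profile from this run is still up (CONTAINERID is a placeholder):

	minikube -p nospam-500584 ssh -- sudo crictl ps -a | grep kube | grep -v pause
	minikube -p nospam-500584 ssh -- sudo crictl logs CONTAINERID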
stderr:
	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.49.2:8443: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
To see the stack trace of this error execute with --v=5 or higher
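Neither preflight warning above is the proximate failure: both are WARNING-level, SystemVerification is already on the command's --ignore-preflight-errors list, and the init actually dies waiting on the control-plane health checks. To verify the two warned conditions inside the node, a sketch assuming standard kicbase tooling:

	minikube -p nospam-500584 ssh -- ls /proc/config.gz /boot
	minikube -p nospam-500584 ssh -- systemctl is-enabled kubelet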
* 
X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.34.1
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 6.8.0-1041-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_IO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 502.012221ms
[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
[control-plane-check] kube-scheduler is not healthy after 4m0.000195256s
[control-plane-check] kube-apiserver is not healthy after 4m0.000340498s
[control-plane-check] kube-controller-manager is not healthy after 4m0.0005883s
A control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	Once you have found the failing container, you can inspect its logs with:
	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
stderr:
	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
To see the stack trace of this error execute with --v=5 or higher
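All three components refuse connections on their check ports, so a useful next step is to probe the same endpoints from inside the node while the machine is still up, separating "nothing listening" from "listening but unhealthy". A sketch, assuming curl is available in the node image (the addresses and ports are the ones checked above):

	minikube -p nospam-500584 ssh -- curl -ks https://192.168.49.2:8443/livez
	minikube -p nospam-500584 ssh -- curl -ks https://127.0.0.1:10257/healthz
	minikube -p nospam-500584 ssh -- curl -ks https://127.0.0.1:10259/livez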
* 
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
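For CI triage, the log bundle the box asks for can be captured non-interactively per profile; a sketch using the --file flag exactly as suggested above:

	out/minikube-linux-amd64 -p nospam-500584 logs --file=logs.txt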
X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.34.1
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 6.8.0-1041-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_IO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 502.012221ms
[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
[control-plane-check] kube-scheduler is not healthy after 4m0.000195256s
[control-plane-check] kube-apiserver is not healthy after 4m0.000340498s
[control-plane-check] kube-controller-manager is not healthy after 4m0.0005883s
A control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	Once you have found the failing container, you can inspect its logs with:
	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
stderr:
	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
To see the stack trace of this error execute with --v=5 or higher
* 
--- FAIL: TestErrorSpam/setup (497.14s)
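The failing command prints "execute with --v=5 or higher" for a stack trace; one way to reproduce the wait-control-plane failure verbosely is to re-run the same kubeadm init inside the node with that flag. A sketch only: the config path and PATH prefix are copied from the failing command, and in practice the original --ignore-preflight-errors list would need to be appended as well:

	minikube -p nospam-500584 ssh -- sudo env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --v=5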
TestFunctional/serial/StartWithProxy (499.23s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-135520 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2239: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-135520 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: exit status 80 (8m17.945605993s)
-- stdout --
	* [functional-135520] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21701
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21701-626179/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21701-626179/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "functional-135520" primary control-plane node in "functional-135520" cluster
	* Pulling base image v0.0.48-1759382731-21643 ...
	* Found network options:
	  - HTTP_PROXY=localhost:41095
	* Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
-- /stdout --
** stderr ** 
	! Local proxy ignored: not passing HTTP_PROXY=localhost:41095 to docker env.
	! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.49.2).
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [functional-135520 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [functional-135520 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.849078ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000965321s
	[control-plane-check] kube-apiserver is not healthy after 4m0.00113367s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001301986s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8441/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.826192ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000539772s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000878282s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000837756s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.826192ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000539772s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000878282s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000837756s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	* 

** /stderr **
functional_test.go:2241: failed minikube start. args "out/minikube-linux-amd64 start -p functional-135520 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio": exit status 80
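The connection-refused errors above cover all three control-plane health endpoints (kube-apiserver on 8441, kube-scheduler on 10259, kube-controller-manager on 10257), which points at the static pods never coming up rather than at a network problem. A minimal triage sketch along the lines kubeadm suggests, assuming shell access to the node (e.g. `minikube ssh -p functional-135520`); CONTAINERID is a placeholder to be filled in from the `crictl ps` output:

	# List control-plane containers, including exited ones (kubeadm's own suggestion):
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# Inspect the failing container's logs (substitute an ID from the listing above):
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID
	# Re-probe the health endpoints kubeadm was polling:
	curl -ks https://192.168.49.2:8441/livez; echo      # kube-apiserver
	curl -ks https://127.0.0.1:10259/livez; echo        # kube-scheduler
	curl -ks https://127.0.0.1:10257/healthz; echo      # kube-controller-manager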
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/serial/StartWithProxy]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/serial/StartWithProxy]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-135520
helpers_test.go:243: (dbg) docker inspect functional-135520:

-- stdout --
	[
	    {
	        "Id": "3dd9a226ea42760de316c3cd10c240a06a297d856d2072b1bb8c9de31097dd20",
	        "Created": "2025-10-06T14:13:32.283355011Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 644403,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-06T14:13:32.318096257Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/3dd9a226ea42760de316c3cd10c240a06a297d856d2072b1bb8c9de31097dd20/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3dd9a226ea42760de316c3cd10c240a06a297d856d2072b1bb8c9de31097dd20/hostname",
	        "HostsPath": "/var/lib/docker/containers/3dd9a226ea42760de316c3cd10c240a06a297d856d2072b1bb8c9de31097dd20/hosts",
	        "LogPath": "/var/lib/docker/containers/3dd9a226ea42760de316c3cd10c240a06a297d856d2072b1bb8c9de31097dd20/3dd9a226ea42760de316c3cd10c240a06a297d856d2072b1bb8c9de31097dd20-json.log",
	        "Name": "/functional-135520",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-135520:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-135520",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "3dd9a226ea42760de316c3cd10c240a06a297d856d2072b1bb8c9de31097dd20",
	                "LowerDir": "/var/lib/docker/overlay2/fc963905026931708302dacddcd89a9d41c6b02cea585cc1ff491aa62dc8d60a-init/diff:/var/lib/docker/overlay2/498c39ad2e273bbda04a4b230222b9767ea2da097b1fe98436168d26143cd080/diff",
	                "MergedDir": "/var/lib/docker/overlay2/fc963905026931708302dacddcd89a9d41c6b02cea585cc1ff491aa62dc8d60a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/fc963905026931708302dacddcd89a9d41c6b02cea585cc1ff491aa62dc8d60a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/fc963905026931708302dacddcd89a9d41c6b02cea585cc1ff491aa62dc8d60a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-135520",
	                "Source": "/var/lib/docker/volumes/functional-135520/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-135520",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-135520",
	                "name.minikube.sigs.k8s.io": "functional-135520",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6368ffca3e5840f94a34614c511d9f0a0a4ca0d05de4fe1f94c8bfdc332f1a62",
	            "SandboxKey": "/var/run/docker/netns/6368ffca3e58",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32878"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32879"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32882"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32880"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32881"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-135520": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ae:d1:94:25:38:1c",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f712be59dd18dac98bed5f234c9f77a39e85277143d6f46285adcd3b0185d552",
	                    "EndpointID": "b816964b653b1b5116e3262dfdc87af272931013ef5b9e2714c9ff7357118a6f",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-135520",
	                        "3dd9a226ea42"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
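For spot checks, the full inspect dump is rarely needed: individual fields can be pulled with Go templates, the same mechanism minikube itself uses later in this log (`--format={{.State.Running}}`). A small sketch against this container; note the hyphenated network name requires `index` rather than dot access:

	docker inspect functional-135520 --format '{{.State.Status}}'
	docker inspect functional-135520 --format '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'
	docker inspect functional-135520 --format '{{(index .NetworkSettings.Networks "functional-135520").IPAddress}}'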
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-135520 -n functional-135520
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-135520 -n functional-135520: exit status 6 (297.706364ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1006 14:21:45.297161  649234 status.go:458] kubeconfig endpoint: get endpoint: "functional-135520" does not appear in /home/jenkins/minikube-integration/21701-626179/kubeconfig

** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
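The exit status 6 here is the kubeconfig symptom the status output calls out: the failed start never registered the `functional-135520` endpoint in the kubeconfig under /home/jenkins/minikube-integration/21701-626179/. When the cluster itself is healthy, the suggested repair is, as a sketch using the profile name from this log:

	minikube update-context -p functional-135520
	kubectl config get-contexts       # confirm a functional-135520 context exists
	kubectl config current-context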
helpers_test.go:252: <<< TestFunctional/serial/StartWithProxy FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/serial/StartWithProxy]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-135520 logs -n 25
helpers_test.go:260: TestFunctional/serial/StartWithProxy logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-only-256452                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-256452   │ jenkins │ v1.37.0 │ 06 Oct 25 13:56 UTC │ 06 Oct 25 13:56 UTC │
	│ delete  │ -p download-only-040731                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-040731   │ jenkins │ v1.37.0 │ 06 Oct 25 13:56 UTC │ 06 Oct 25 13:56 UTC │
	│ start   │ --download-only -p download-docker-650660 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-650660 │ jenkins │ v1.37.0 │ 06 Oct 25 13:56 UTC │                     │
	│ delete  │ -p download-docker-650660                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-650660 │ jenkins │ v1.37.0 │ 06 Oct 25 13:56 UTC │ 06 Oct 25 13:56 UTC │
	│ start   │ --download-only -p binary-mirror-501421 --alsologtostderr --binary-mirror http://127.0.0.1:36469 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-501421   │ jenkins │ v1.37.0 │ 06 Oct 25 13:56 UTC │                     │
	│ delete  │ -p binary-mirror-501421                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-501421   │ jenkins │ v1.37.0 │ 06 Oct 25 13:56 UTC │ 06 Oct 25 13:56 UTC │
	│ addons  │ enable dashboard -p addons-834039                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-834039          │ jenkins │ v1.37.0 │ 06 Oct 25 13:56 UTC │                     │
	│ addons  │ disable dashboard -p addons-834039                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-834039          │ jenkins │ v1.37.0 │ 06 Oct 25 13:56 UTC │                     │
	│ start   │ -p addons-834039 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-834039          │ jenkins │ v1.37.0 │ 06 Oct 25 13:56 UTC │                     │
	│ delete  │ -p addons-834039                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-834039          │ jenkins │ v1.37.0 │ 06 Oct 25 14:04 UTC │ 06 Oct 25 14:04 UTC │
	│ start   │ -p nospam-500584 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-500584 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                  │ nospam-500584          │ jenkins │ v1.37.0 │ 06 Oct 25 14:04 UTC │                     │
	│ start   │ nospam-500584 --log_dir /tmp/nospam-500584 start --dry-run                                                                                                                                                                                                                                                                                                                                                                                                               │ nospam-500584          │ jenkins │ v1.37.0 │ 06 Oct 25 14:13 UTC │                     │
	│ start   │ nospam-500584 --log_dir /tmp/nospam-500584 start --dry-run                                                                                                                                                                                                                                                                                                                                                                                                               │ nospam-500584          │ jenkins │ v1.37.0 │ 06 Oct 25 14:13 UTC │                     │
	│ start   │ nospam-500584 --log_dir /tmp/nospam-500584 start --dry-run                                                                                                                                                                                                                                                                                                                                                                                                               │ nospam-500584          │ jenkins │ v1.37.0 │ 06 Oct 25 14:13 UTC │                     │
	│ pause   │ nospam-500584 --log_dir /tmp/nospam-500584 pause                                                                                                                                                                                                                                                                                                                                                                                                                         │ nospam-500584          │ jenkins │ v1.37.0 │ 06 Oct 25 14:13 UTC │ 06 Oct 25 14:13 UTC │
	│ pause   │ nospam-500584 --log_dir /tmp/nospam-500584 pause                                                                                                                                                                                                                                                                                                                                                                                                                         │ nospam-500584          │ jenkins │ v1.37.0 │ 06 Oct 25 14:13 UTC │ 06 Oct 25 14:13 UTC │
	│ pause   │ nospam-500584 --log_dir /tmp/nospam-500584 pause                                                                                                                                                                                                                                                                                                                                                                                                                         │ nospam-500584          │ jenkins │ v1.37.0 │ 06 Oct 25 14:13 UTC │ 06 Oct 25 14:13 UTC │
	│ unpause │ nospam-500584 --log_dir /tmp/nospam-500584 unpause                                                                                                                                                                                                                                                                                                                                                                                                                       │ nospam-500584          │ jenkins │ v1.37.0 │ 06 Oct 25 14:13 UTC │ 06 Oct 25 14:13 UTC │
	│ unpause │ nospam-500584 --log_dir /tmp/nospam-500584 unpause                                                                                                                                                                                                                                                                                                                                                                                                                       │ nospam-500584          │ jenkins │ v1.37.0 │ 06 Oct 25 14:13 UTC │ 06 Oct 25 14:13 UTC │
	│ unpause │ nospam-500584 --log_dir /tmp/nospam-500584 unpause                                                                                                                                                                                                                                                                                                                                                                                                                       │ nospam-500584          │ jenkins │ v1.37.0 │ 06 Oct 25 14:13 UTC │ 06 Oct 25 14:13 UTC │
	│ stop    │ nospam-500584 --log_dir /tmp/nospam-500584 stop                                                                                                                                                                                                                                                                                                                                                                                                                          │ nospam-500584          │ jenkins │ v1.37.0 │ 06 Oct 25 14:13 UTC │ 06 Oct 25 14:13 UTC │
	│ stop    │ nospam-500584 --log_dir /tmp/nospam-500584 stop                                                                                                                                                                                                                                                                                                                                                                                                                          │ nospam-500584          │ jenkins │ v1.37.0 │ 06 Oct 25 14:13 UTC │ 06 Oct 25 14:13 UTC │
	│ stop    │ nospam-500584 --log_dir /tmp/nospam-500584 stop                                                                                                                                                                                                                                                                                                                                                                                                                          │ nospam-500584          │ jenkins │ v1.37.0 │ 06 Oct 25 14:13 UTC │ 06 Oct 25 14:13 UTC │
	│ delete  │ -p nospam-500584                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ nospam-500584          │ jenkins │ v1.37.0 │ 06 Oct 25 14:13 UTC │ 06 Oct 25 14:13 UTC │
	│ start   │ -p functional-135520 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                            │ functional-135520      │ jenkins │ v1.37.0 │ 06 Oct 25 14:13 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/06 14:13:27
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1006 14:13:27.088909  643815 out.go:360] Setting OutFile to fd 1 ...
	I1006 14:13:27.089178  643815 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 14:13:27.089182  643815 out.go:374] Setting ErrFile to fd 2...
	I1006 14:13:27.089185  643815 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 14:13:27.089402  643815 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-626179/.minikube/bin
	I1006 14:13:27.089875  643815 out.go:368] Setting JSON to false
	I1006 14:13:27.090710  643815 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":17743,"bootTime":1759742264,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1006 14:13:27.090808  643815 start.go:140] virtualization: kvm guest
	I1006 14:13:27.092973  643815 out.go:179] * [functional-135520] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1006 14:13:27.094078  643815 out.go:179]   - MINIKUBE_LOCATION=21701
	I1006 14:13:27.094105  643815 notify.go:220] Checking for updates...
	I1006 14:13:27.096109  643815 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1006 14:13:27.097394  643815 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21701-626179/kubeconfig
	I1006 14:13:27.098498  643815 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21701-626179/.minikube
	I1006 14:13:27.099585  643815 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1006 14:13:27.100556  643815 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1006 14:13:27.101702  643815 driver.go:421] Setting default libvirt URI to qemu:///system
	I1006 14:13:27.125258  643815 docker.go:123] docker version: linux-28.5.0:Docker Engine - Community
	I1006 14:13:27.125354  643815 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 14:13:27.183168  643815 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-06 14:13:27.173792433 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1006 14:13:27.183282  643815 docker.go:318] overlay module found
	I1006 14:13:27.184990  643815 out.go:179] * Using the docker driver based on user configuration
	I1006 14:13:27.186196  643815 start.go:304] selected driver: docker
	I1006 14:13:27.186217  643815 start.go:924] validating driver "docker" against <nil>
	I1006 14:13:27.186229  643815 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1006 14:13:27.186773  643815 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 14:13:27.246997  643815 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-06 14:13:27.236712903 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1006 14:13:27.247176  643815 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1006 14:13:27.247408  643815 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1006 14:13:27.249083  643815 out.go:179] * Using Docker driver with root privileges
	I1006 14:13:27.250259  643815 cni.go:84] Creating CNI manager for ""
	I1006 14:13:27.250316  643815 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1006 14:13:27.250324  643815 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1006 14:13:27.250390  643815 start.go:348] cluster config:
	{Name:functional-135520 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-135520 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 14:13:27.251607  643815 out.go:179] * Starting "functional-135520" primary control-plane node in "functional-135520" cluster
	I1006 14:13:27.252618  643815 cache.go:123] Beginning downloading kic base image for docker with crio
	I1006 14:13:27.253692  643815 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1006 14:13:27.254532  643815 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1006 14:13:27.254555  643815 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21701-626179/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1006 14:13:27.254561  643815 cache.go:58] Caching tarball of preloaded images
	I1006 14:13:27.254640  643815 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1006 14:13:27.254650  643815 preload.go:233] Found /home/jenkins/minikube-integration/21701-626179/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1006 14:13:27.254656  643815 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1006 14:13:27.255055  643815 profile.go:143] Saving config to /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/config.json ...
	I1006 14:13:27.255079  643815 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/config.json: {Name:mkaac11a72af70a24e01e60f4b07d16b1efde95a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:13:27.274670  643815 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1006 14:13:27.274680  643815 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1006 14:13:27.274694  643815 cache.go:232] Successfully downloaded all kic artifacts
	I1006 14:13:27.274720  643815 start.go:360] acquireMachinesLock for functional-135520: {Name:mk634323c4619e77647ac9d9aaca492e399526ae Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1006 14:13:27.274801  643815 start.go:364] duration metric: took 69.74µs to acquireMachinesLock for "functional-135520"
	I1006 14:13:27.274818  643815 start.go:93] Provisioning new machine with config: &{Name:functional-135520 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-135520 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1006 14:13:27.274875  643815 start.go:125] createHost starting for "" (driver="docker")
	I1006 14:13:27.276559  643815 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	W1006 14:13:27.276804  643815 out.go:285] ! Local proxy ignored: not passing HTTP_PROXY=localhost:41095 to docker env.
	I1006 14:13:27.276827  643815 start.go:159] libmachine.API.Create for "functional-135520" (driver="docker")
	I1006 14:13:27.276849  643815 client.go:168] LocalClient.Create starting
	I1006 14:13:27.276893  643815 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem
	I1006 14:13:27.276919  643815 main.go:141] libmachine: Decoding PEM data...
	I1006 14:13:27.276932  643815 main.go:141] libmachine: Parsing certificate...
	I1006 14:13:27.276981  643815 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21701-626179/.minikube/certs/cert.pem
	I1006 14:13:27.276999  643815 main.go:141] libmachine: Decoding PEM data...
	I1006 14:13:27.277006  643815 main.go:141] libmachine: Parsing certificate...
	I1006 14:13:27.277682  643815 cli_runner.go:164] Run: docker network inspect functional-135520 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1006 14:13:27.294506  643815 cli_runner.go:211] docker network inspect functional-135520 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1006 14:13:27.294568  643815 network_create.go:284] running [docker network inspect functional-135520] to gather additional debugging logs...
	I1006 14:13:27.294580  643815 cli_runner.go:164] Run: docker network inspect functional-135520
	W1006 14:13:27.310240  643815 cli_runner.go:211] docker network inspect functional-135520 returned with exit code 1
	I1006 14:13:27.310256  643815 network_create.go:287] error running [docker network inspect functional-135520]: docker network inspect functional-135520: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network functional-135520 not found
	I1006 14:13:27.310278  643815 network_create.go:289] output of [docker network inspect functional-135520]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network functional-135520 not found
	
	** /stderr **
	I1006 14:13:27.310375  643815 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1006 14:13:27.328981  643815 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001c147c0}
	I1006 14:13:27.329022  643815 network_create.go:124] attempt to create docker network functional-135520 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1006 14:13:27.329087  643815 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=functional-135520 functional-135520
	I1006 14:13:27.385559  643815 network_create.go:108] docker network functional-135520 192.168.49.0/24 created
	I1006 14:13:27.385585  643815 kic.go:121] calculated static IP "192.168.49.2" for the "functional-135520" container
	I1006 14:13:27.385662  643815 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1006 14:13:27.401768  643815 cli_runner.go:164] Run: docker volume create functional-135520 --label name.minikube.sigs.k8s.io=functional-135520 --label created_by.minikube.sigs.k8s.io=true
	I1006 14:13:27.418971  643815 oci.go:103] Successfully created a docker volume functional-135520
	I1006 14:13:27.419032  643815 cli_runner.go:164] Run: docker run --rm --name functional-135520-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-135520 --entrypoint /usr/bin/test -v functional-135520:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib
	I1006 14:13:27.783424  643815 oci.go:107] Successfully prepared a docker volume functional-135520
	I1006 14:13:27.783454  643815 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1006 14:13:27.783485  643815 kic.go:194] Starting extracting preloaded images to volume ...
	I1006 14:13:27.783560  643815 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21701-626179/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v functional-135520:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir
	I1006 14:13:32.209807  643815 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21701-626179/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v functional-135520:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir: (4.426196486s)
	I1006 14:13:32.209833  643815 kic.go:203] duration metric: took 4.426345942s to extract preloaded images to volume ...
	W1006 14:13:32.209965  643815 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1006 14:13:32.210025  643815 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1006 14:13:32.210068  643815 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1006 14:13:32.266062  643815 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname functional-135520 --name functional-135520 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-135520 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=functional-135520 --network functional-135520 --ip 192.168.49.2 --volume functional-135520:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8441 --publish=127.0.0.1::8441 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d
	I1006 14:13:32.530964  643815 cli_runner.go:164] Run: docker container inspect functional-135520 --format={{.State.Running}}
	I1006 14:13:32.550507  643815 cli_runner.go:164] Run: docker container inspect functional-135520 --format={{.State.Status}}
	I1006 14:13:32.568616  643815 cli_runner.go:164] Run: docker exec functional-135520 stat /var/lib/dpkg/alternatives/iptables
	I1006 14:13:32.620867  643815 oci.go:144] the created container "functional-135520" has a running status.
	I1006 14:13:32.620922  643815 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21701-626179/.minikube/machines/functional-135520/id_rsa...
	I1006 14:13:33.237993  643815 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21701-626179/.minikube/machines/functional-135520/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1006 14:13:33.262422  643815 cli_runner.go:164] Run: docker container inspect functional-135520 --format={{.State.Status}}
	I1006 14:13:33.281108  643815 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1006 14:13:33.281130  643815 kic_runner.go:114] Args: [docker exec --privileged functional-135520 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1006 14:13:33.323946  643815 cli_runner.go:164] Run: docker container inspect functional-135520 --format={{.State.Status}}
	I1006 14:13:33.340945  643815 machine.go:93] provisionDockerMachine start ...
	I1006 14:13:33.341041  643815 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-135520
	I1006 14:13:33.358548  643815 main.go:141] libmachine: Using SSH client type: native
	I1006 14:13:33.358788  643815 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32878 <nil> <nil>}
	I1006 14:13:33.358794  643815 main.go:141] libmachine: About to run SSH command:
	hostname
	I1006 14:13:33.502196  643815 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-135520
	
	I1006 14:13:33.502229  643815 ubuntu.go:182] provisioning hostname "functional-135520"
	I1006 14:13:33.502293  643815 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-135520
	I1006 14:13:33.518922  643815 main.go:141] libmachine: Using SSH client type: native
	I1006 14:13:33.519119  643815 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32878 <nil> <nil>}
	I1006 14:13:33.519126  643815 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-135520 && echo "functional-135520" | sudo tee /etc/hostname
	I1006 14:13:33.670680  643815 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-135520
	
	I1006 14:13:33.670739  643815 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-135520
	I1006 14:13:33.688961  643815 main.go:141] libmachine: Using SSH client type: native
	I1006 14:13:33.689170  643815 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32878 <nil> <nil>}
	I1006 14:13:33.689181  643815 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-135520' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-135520/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-135520' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1006 14:13:33.831855  643815 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1006 14:13:33.831876  643815 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21701-626179/.minikube CaCertPath:/home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21701-626179/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21701-626179/.minikube}
	I1006 14:13:33.831910  643815 ubuntu.go:190] setting up certificates
	I1006 14:13:33.831921  643815 provision.go:84] configureAuth start
	I1006 14:13:33.831980  643815 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-135520
	I1006 14:13:33.849189  643815 provision.go:143] copyHostCerts
	I1006 14:13:33.849269  643815 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-626179/.minikube/ca.pem, removing ...
	I1006 14:13:33.849277  643815 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-626179/.minikube/ca.pem
	I1006 14:13:33.849345  643815 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21701-626179/.minikube/ca.pem (1082 bytes)
	I1006 14:13:33.849426  643815 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-626179/.minikube/cert.pem, removing ...
	I1006 14:13:33.849429  643815 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-626179/.minikube/cert.pem
	I1006 14:13:33.849452  643815 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21701-626179/.minikube/cert.pem (1123 bytes)
	I1006 14:13:33.849509  643815 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-626179/.minikube/key.pem, removing ...
	I1006 14:13:33.849512  643815 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-626179/.minikube/key.pem
	I1006 14:13:33.849533  643815 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21701-626179/.minikube/key.pem (1679 bytes)
	I1006 14:13:33.849577  643815 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21701-626179/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca-key.pem org=jenkins.functional-135520 san=[127.0.0.1 192.168.49.2 functional-135520 localhost minikube]
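
The san=[...] list above becomes the Subject Alternative Name extension of the generated server certificate. A hedged way to confirm it on the server.pem written below (the grep form is an assumption; this log never inspects the cert):

	# Hedged sketch: list the SANs on the server cert generated above
	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/21701-626179/.minikube/machines/server.pem \
	  | grep -A1 'Subject Alternative Name'
	# expected, per the san=[...] list: 127.0.0.1, 192.168.49.2,
	# functional-135520, localhost, minikube
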
	I1006 14:13:34.252468  643815 provision.go:177] copyRemoteCerts
	I1006 14:13:34.252522  643815 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1006 14:13:34.252565  643815 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-135520
	I1006 14:13:34.270124  643815 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32878 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/functional-135520/id_rsa Username:docker}
	I1006 14:13:34.372118  643815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1006 14:13:34.390799  643815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1006 14:13:34.408718  643815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1006 14:13:34.425445  643815 provision.go:87] duration metric: took 593.511659ms to configureAuth
	I1006 14:13:34.425465  643815 ubuntu.go:206] setting minikube options for container-runtime
	I1006 14:13:34.425639  643815 config.go:182] Loaded profile config "functional-135520": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 14:13:34.425732  643815 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-135520
	I1006 14:13:34.442596  643815 main.go:141] libmachine: Using SSH client type: native
	I1006 14:13:34.442819  643815 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32878 <nil> <nil>}
	I1006 14:13:34.442829  643815 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1006 14:13:34.695667  643815 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1006 14:13:34.695682  643815 machine.go:96] duration metric: took 1.354725933s to provisionDockerMachine
	I1006 14:13:34.695691  643815 client.go:171] duration metric: took 7.418837809s to LocalClient.Create
	I1006 14:13:34.695709  643815 start.go:167] duration metric: took 7.418886452s to libmachine.API.Create "functional-135520"
	I1006 14:13:34.695717  643815 start.go:293] postStartSetup for "functional-135520" (driver="docker")
	I1006 14:13:34.695727  643815 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1006 14:13:34.695799  643815 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1006 14:13:34.695851  643815 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-135520
	I1006 14:13:34.713466  643815 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32878 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/functional-135520/id_rsa Username:docker}
	I1006 14:13:34.816763  643815 ssh_runner.go:195] Run: cat /etc/os-release
	I1006 14:13:34.820385  643815 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1006 14:13:34.820412  643815 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1006 14:13:34.820422  643815 filesync.go:126] Scanning /home/jenkins/minikube-integration/21701-626179/.minikube/addons for local assets ...
	I1006 14:13:34.820474  643815 filesync.go:126] Scanning /home/jenkins/minikube-integration/21701-626179/.minikube/files for local assets ...
	I1006 14:13:34.820571  643815 filesync.go:149] local asset: /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem -> 6297192.pem in /etc/ssl/certs
	I1006 14:13:34.820648  643815 filesync.go:149] local asset: /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/test/nested/copy/629719/hosts -> hosts in /etc/test/nested/copy/629719
	I1006 14:13:34.820683  643815 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/629719
	I1006 14:13:34.828344  643815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem --> /etc/ssl/certs/6297192.pem (1708 bytes)
	I1006 14:13:34.847835  643815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/test/nested/copy/629719/hosts --> /etc/test/nested/copy/629719/hosts (40 bytes)
	I1006 14:13:34.864812  643815 start.go:296] duration metric: took 169.081036ms for postStartSetup
	I1006 14:13:34.865247  643815 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-135520
	I1006 14:13:34.882837  643815 profile.go:143] Saving config to /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/config.json ...
	I1006 14:13:34.883075  643815 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1006 14:13:34.883110  643815 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-135520
	I1006 14:13:34.900155  643815 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32878 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/functional-135520/id_rsa Username:docker}
	I1006 14:13:34.999327  643815 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1006 14:13:35.004018  643815 start.go:128] duration metric: took 7.729125683s to createHost
	I1006 14:13:35.004040  643815 start.go:83] releasing machines lock for "functional-135520", held for 7.729230951s
	I1006 14:13:35.004126  643815 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-135520
	I1006 14:13:35.023042  643815 out.go:179] * Found network options:
	I1006 14:13:35.024335  643815 out.go:179]   - HTTP_PROXY=localhost:41095
	W1006 14:13:35.025514  643815 out.go:285] ! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.49.2).
	I1006 14:13:35.026623  643815 out.go:179] * Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
	I1006 14:13:35.027861  643815 ssh_runner.go:195] Run: cat /version.json
	I1006 14:13:35.027897  643815 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-135520
	I1006 14:13:35.027951  643815 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1006 14:13:35.028013  643815 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-135520
	I1006 14:13:35.046472  643815 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32878 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/functional-135520/id_rsa Username:docker}
	I1006 14:13:35.047565  643815 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32878 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/functional-135520/id_rsa Username:docker}
	I1006 14:13:35.199342  643815 ssh_runner.go:195] Run: systemctl --version
	I1006 14:13:35.205834  643815 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1006 14:13:35.239825  643815 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1006 14:13:35.244231  643815 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1006 14:13:35.244299  643815 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1006 14:13:35.269493  643815 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1006 14:13:35.269509  643815 start.go:495] detecting cgroup driver to use...
	I1006 14:13:35.269543  643815 detect.go:190] detected "systemd" cgroup driver on host os
	I1006 14:13:35.269594  643815 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1006 14:13:35.285675  643815 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1006 14:13:35.297399  643815 docker.go:218] disabling cri-docker service (if available) ...
	I1006 14:13:35.297439  643815 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1006 14:13:35.312883  643815 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1006 14:13:35.329459  643815 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1006 14:13:35.406431  643815 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1006 14:13:35.490232  643815 docker.go:234] disabling docker service ...
	I1006 14:13:35.490281  643815 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1006 14:13:35.509460  643815 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1006 14:13:35.521339  643815 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1006 14:13:35.601671  643815 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1006 14:13:35.683859  643815 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1006 14:13:35.696683  643815 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1006 14:13:35.710332  643815 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1006 14:13:35.710413  643815 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:13:35.720448  643815 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1006 14:13:35.720504  643815 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:13:35.728987  643815 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:13:35.737566  643815 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:13:35.746092  643815 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1006 14:13:35.754465  643815 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:13:35.763253  643815 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:13:35.776707  643815 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:13:35.785129  643815 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1006 14:13:35.792342  643815 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1006 14:13:35.799493  643815 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 14:13:35.875778  643815 ssh_runner.go:195] Run: sudo systemctl restart crio
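
The sed edits above all rewrite /etc/crio/crio.conf.d/02-crio.conf before this restart. A hedged spot-check of the resulting drop-in (expected values are inferred from the commands, not shown in this log):

	# Hedged sketch: confirm the settings the sed commands above put in place
	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf
	# expected (assumed): pause_image = "registry.k8s.io/pause:3.10.1",
	# cgroup_manager = "systemd", conmon_cgroup = "pod", and
	# "net.ipv4.ip_unprivileged_port_start=0" under default_sysctls
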
	I1006 14:13:35.978255  643815 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1006 14:13:35.978322  643815 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1006 14:13:35.982553  643815 start.go:563] Will wait 60s for crictl version
	I1006 14:13:35.982603  643815 ssh_runner.go:195] Run: which crictl
	I1006 14:13:35.986136  643815 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1006 14:13:36.012596  643815 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1006 14:13:36.012675  643815 ssh_runner.go:195] Run: crio --version
	I1006 14:13:36.040259  643815 ssh_runner.go:195] Run: crio --version
	I1006 14:13:36.069854  643815 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1006 14:13:36.070967  643815 cli_runner.go:164] Run: docker network inspect functional-135520 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1006 14:13:36.087742  643815 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1006 14:13:36.091886  643815 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1006 14:13:36.102079  643815 kubeadm.go:883] updating cluster {Name:functional-135520 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-135520 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1006 14:13:36.102197  643815 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1006 14:13:36.102277  643815 ssh_runner.go:195] Run: sudo crictl images --output json
	I1006 14:13:36.134113  643815 crio.go:514] all images are preloaded for cri-o runtime.
	I1006 14:13:36.134124  643815 crio.go:433] Images already preloaded, skipping extraction
	I1006 14:13:36.134166  643815 ssh_runner.go:195] Run: sudo crictl images --output json
	I1006 14:13:36.159939  643815 crio.go:514] all images are preloaded for cri-o runtime.
	I1006 14:13:36.159962  643815 cache_images.go:85] Images are preloaded, skipping loading
	I1006 14:13:36.159978  643815 kubeadm.go:934] updating node { 192.168.49.2 8441 v1.34.1 crio true true} ...
	I1006 14:13:36.160090  643815 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-135520 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:functional-135520 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1006 14:13:36.160149  643815 ssh_runner.go:195] Run: crio config
	I1006 14:13:36.206391  643815 cni.go:84] Creating CNI manager for ""
	I1006 14:13:36.206408  643815 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1006 14:13:36.206429  643815 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1006 14:13:36.206448  643815 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-135520 NodeName:functional-135520 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1006 14:13:36.206560  643815 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-135520"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1006 14:13:36.206616  643815 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1006 14:13:36.214689  643815 binaries.go:44] Found k8s binaries, skipping transfer
	I1006 14:13:36.214751  643815 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1006 14:13:36.222193  643815 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1006 14:13:36.234504  643815 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1006 14:13:36.248831  643815 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
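
The rendered kubeadm config shown in full above is staged as /var/tmp/minikube/kubeadm.yaml.new here. A hedged way to validate it before init; --dry-run is a standard kubeadm flag, not a step this run performs:

	# Hedged sketch: validate the staged config without mutating the node
	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init \
	  --config /var/tmp/minikube/kubeadm.yaml.new --dry-run
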
	I1006 14:13:36.260843  643815 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1006 14:13:36.264268  643815 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1006 14:13:36.273663  643815 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 14:13:36.353166  643815 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1006 14:13:36.377472  643815 certs.go:69] Setting up /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520 for IP: 192.168.49.2
	I1006 14:13:36.377484  643815 certs.go:195] generating shared ca certs ...
	I1006 14:13:36.377508  643815 certs.go:227] acquiring lock for ca certs: {Name:mka0cc25cb6a953e937aa825fc55167759271aaf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:13:36.377666  643815 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21701-626179/.minikube/ca.key
	I1006 14:13:36.377700  643815 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21701-626179/.minikube/proxy-client-ca.key
	I1006 14:13:36.377706  643815 certs.go:257] generating profile certs ...
	I1006 14:13:36.377792  643815 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/client.key
	I1006 14:13:36.377807  643815 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/client.crt with IP's: []
	I1006 14:13:36.402926  643815 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/client.crt ...
	I1006 14:13:36.402945  643815 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/client.crt: {Name:mk90ce0c9aa59a142f42f2a2c1547c2e91b6b33b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:13:36.403152  643815 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/client.key ...
	I1006 14:13:36.403162  643815 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/client.key: {Name:mk9332a0412f1974330afec6ca364eef10c137d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:13:36.403309  643815 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/apiserver.key.72a46e8e
	I1006 14:13:36.403323  643815 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/apiserver.crt.72a46e8e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1006 14:13:36.461584  643815 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/apiserver.crt.72a46e8e ...
	I1006 14:13:36.461602  643815 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/apiserver.crt.72a46e8e: {Name:mk9a49736422eb5010d22397496a7c44c01dc22f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:13:36.461797  643815 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/apiserver.key.72a46e8e ...
	I1006 14:13:36.461810  643815 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/apiserver.key.72a46e8e: {Name:mk347775e799becbafc301eeeae4711435de491a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:13:36.461923  643815 certs.go:382] copying /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/apiserver.crt.72a46e8e -> /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/apiserver.crt
	I1006 14:13:36.462035  643815 certs.go:386] copying /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/apiserver.key.72a46e8e -> /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/apiserver.key
	I1006 14:13:36.462104  643815 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/proxy-client.key
	I1006 14:13:36.462134  643815 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/proxy-client.crt with IP's: []
	I1006 14:13:36.631489  643815 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/proxy-client.crt ...
	I1006 14:13:36.631507  643815 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/proxy-client.crt: {Name:mkf01e4cc5e541c7b16cebb6787140920f3d4e94 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:13:36.631719  643815 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/proxy-client.key ...
	I1006 14:13:36.631733  643815 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/proxy-client.key: {Name:mk62edfb78a0528d4b232de44a81fbef78691ea1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:13:36.631951  643815 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/629719.pem (1338 bytes)
	W1006 14:13:36.631989  643815 certs.go:480] ignoring /home/jenkins/minikube-integration/21701-626179/.minikube/certs/629719_empty.pem, impossibly tiny 0 bytes
	I1006 14:13:36.631995  643815 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca-key.pem (1675 bytes)
	I1006 14:13:36.632016  643815 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem (1082 bytes)
	I1006 14:13:36.632035  643815 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/cert.pem (1123 bytes)
	I1006 14:13:36.632053  643815 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/key.pem (1679 bytes)
	I1006 14:13:36.632088  643815 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem (1708 bytes)
	I1006 14:13:36.632717  643815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1006 14:13:36.651105  643815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1006 14:13:36.668172  643815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1006 14:13:36.684714  643815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1006 14:13:36.701158  643815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1006 14:13:36.717838  643815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1006 14:13:36.734300  643815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1006 14:13:36.750537  643815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1006 14:13:36.767091  643815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1006 14:13:36.785653  643815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/certs/629719.pem --> /usr/share/ca-certificates/629719.pem (1338 bytes)
	I1006 14:13:36.802545  643815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem --> /usr/share/ca-certificates/6297192.pem (1708 bytes)
	I1006 14:13:36.820187  643815 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1006 14:13:36.832099  643815 ssh_runner.go:195] Run: openssl version
	I1006 14:13:36.837833  643815 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1006 14:13:36.845728  643815 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1006 14:13:36.849325  643815 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  6 13:56 /usr/share/ca-certificates/minikubeCA.pem
	I1006 14:13:36.849359  643815 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1006 14:13:36.883387  643815 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1006 14:13:36.892147  643815 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/629719.pem && ln -fs /usr/share/ca-certificates/629719.pem /etc/ssl/certs/629719.pem"
	I1006 14:13:36.900503  643815 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/629719.pem
	I1006 14:13:36.903978  643815 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  6 14:13 /usr/share/ca-certificates/629719.pem
	I1006 14:13:36.904016  643815 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/629719.pem
	I1006 14:13:36.939334  643815 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/629719.pem /etc/ssl/certs/51391683.0"
	I1006 14:13:36.947937  643815 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6297192.pem && ln -fs /usr/share/ca-certificates/6297192.pem /etc/ssl/certs/6297192.pem"
	I1006 14:13:36.957079  643815 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6297192.pem
	I1006 14:13:36.960676  643815 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  6 14:13 /usr/share/ca-certificates/6297192.pem
	I1006 14:13:36.960717  643815 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6297192.pem
	I1006 14:13:36.994529  643815 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6297192.pem /etc/ssl/certs/3ec20f2e.0"
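
Each openssl x509 -hash call above computes the 8-hex-digit subject hash that names the matching /etc/ssl/certs symlink (b5213941.0, 51391683.0, 3ec20f2e.0 in this run). A hedged restatement of that convention for the first cert:

	# Hedged sketch: the subject hash names the /etc/ssl/certs symlink
	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"
	# h is b5213941 here, giving /etc/ssl/certs/b5213941.0 as in the log
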
	I1006 14:13:37.003167  643815 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1006 14:13:37.006714  643815 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1006 14:13:37.006761  643815 kubeadm.go:400] StartCluster: {Name:functional-135520 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-135520 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 14:13:37.006841  643815 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1006 14:13:37.006886  643815 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1006 14:13:37.034282  643815 cri.go:89] found id: ""
	I1006 14:13:37.034329  643815 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1006 14:13:37.042583  643815 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1006 14:13:37.050080  643815 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1006 14:13:37.050126  643815 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1006 14:13:37.057431  643815 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1006 14:13:37.057437  643815 kubeadm.go:157] found existing configuration files:
	
	I1006 14:13:37.057470  643815 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1006 14:13:37.064595  643815 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1006 14:13:37.064636  643815 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1006 14:13:37.071451  643815 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1006 14:13:37.078528  643815 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1006 14:13:37.078570  643815 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1006 14:13:37.085328  643815 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1006 14:13:37.092305  643815 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1006 14:13:37.092339  643815 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1006 14:13:37.099317  643815 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1006 14:13:37.106476  643815 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1006 14:13:37.106514  643815 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1006 14:13:37.114218  643815 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1006 14:13:37.150005  643815 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1006 14:13:37.150062  643815 kubeadm.go:318] [preflight] Running pre-flight checks
	I1006 14:13:37.178643  643815 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1006 14:13:37.178730  643815 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1006 14:13:37.178784  643815 kubeadm.go:318] OS: Linux
	I1006 14:13:37.178854  643815 kubeadm.go:318] CGROUPS_CPU: enabled
	I1006 14:13:37.178953  643815 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1006 14:13:37.179029  643815 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1006 14:13:37.179088  643815 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1006 14:13:37.179167  643815 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1006 14:13:37.179247  643815 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1006 14:13:37.179315  643815 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1006 14:13:37.179374  643815 kubeadm.go:318] CGROUPS_IO: enabled
	I1006 14:13:37.241401  643815 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1006 14:13:37.241568  643815 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1006 14:13:37.241693  643815 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1006 14:13:37.250653  643815 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1006 14:13:37.252840  643815 out.go:252]   - Generating certificates and keys ...
	I1006 14:13:37.252941  643815 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1006 14:13:37.253033  643815 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1006 14:13:37.459508  643815 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1006 14:13:38.006078  643815 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1006 14:13:38.461723  643815 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1006 14:13:38.592798  643815 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1006 14:13:38.687430  643815 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1006 14:13:38.687591  643815 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [functional-135520 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1006 14:13:38.934539  643815 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1006 14:13:38.934655  643815 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [functional-135520 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1006 14:13:39.054464  643815 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1006 14:13:39.266266  643815 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1006 14:13:39.486418  643815 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1006 14:13:39.486520  643815 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1006 14:13:39.813615  643815 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1006 14:13:40.271874  643815 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1006 14:13:40.837532  643815 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1006 14:13:41.015535  643815 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1006 14:13:41.233364  643815 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1006 14:13:41.233860  643815 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1006 14:13:41.237763  643815 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1006 14:13:41.240625  643815 out.go:252]   - Booting up control plane ...
	I1006 14:13:41.240709  643815 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1006 14:13:41.240772  643815 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1006 14:13:41.240825  643815 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1006 14:13:41.254399  643815 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1006 14:13:41.254534  643815 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1006 14:13:41.263026  643815 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1006 14:13:41.263369  643815 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1006 14:13:41.263428  643815 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1006 14:13:41.356053  643815 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1006 14:13:41.356238  643815 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1006 14:13:41.857807  643815 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 501.849078ms
	I1006 14:13:41.860685  643815 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1006 14:13:41.860766  643815 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	I1006 14:13:41.860886  643815 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1006 14:13:41.860995  643815 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1006 14:17:41.862801  643815 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000965321s
	I1006 14:17:41.863130  643815 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.00113367s
	I1006 14:17:41.863319  643815 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001301986s
	I1006 14:17:41.863325  643815 kubeadm.go:318] 
	I1006 14:17:41.863543  643815 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1006 14:17:41.863811  643815 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1006 14:17:41.864034  643815 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1006 14:17:41.864333  643815 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1006 14:17:41.864515  643815 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1006 14:17:41.864690  643815 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1006 14:17:41.864700  643815 kubeadm.go:318] 
	I1006 14:17:41.867800  643815 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1006 14:17:41.867956  643815 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1006 14:17:41.868566  643815 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8441/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1006 14:17:41.868620  643815 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	W1006 14:17:41.868791  643815 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [functional-135520 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [functional-135520 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.849078ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000965321s
	[control-plane-check] kube-apiserver is not healthy after 4m0.00113367s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001301986s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8441/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	I1006 14:17:41.868866  643815 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1006 14:17:42.325497  643815 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1006 14:17:42.338615  643815 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1006 14:17:42.338664  643815 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1006 14:17:42.346718  643815 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1006 14:17:42.346729  643815 kubeadm.go:157] found existing configuration files:
	
	I1006 14:17:42.346772  643815 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1006 14:17:42.354666  643815 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1006 14:17:42.354718  643815 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1006 14:17:42.362340  643815 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1006 14:17:42.370269  643815 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1006 14:17:42.370328  643815 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1006 14:17:42.377784  643815 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1006 14:17:42.385230  643815 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1006 14:17:42.385290  643815 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1006 14:17:42.392434  643815 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1006 14:17:42.400486  643815 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1006 14:17:42.400546  643815 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1006 14:17:42.408116  643815 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1006 14:17:42.446406  643815 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1006 14:17:42.446451  643815 kubeadm.go:318] [preflight] Running pre-flight checks
	I1006 14:17:42.466756  643815 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1006 14:17:42.466831  643815 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1006 14:17:42.466872  643815 kubeadm.go:318] OS: Linux
	I1006 14:17:42.466919  643815 kubeadm.go:318] CGROUPS_CPU: enabled
	I1006 14:17:42.466954  643815 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1006 14:17:42.466990  643815 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1006 14:17:42.467041  643815 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1006 14:17:42.467077  643815 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1006 14:17:42.467118  643815 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1006 14:17:42.467155  643815 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1006 14:17:42.467269  643815 kubeadm.go:318] CGROUPS_IO: enabled
	I1006 14:17:42.527060  643815 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1006 14:17:42.527226  643815 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1006 14:17:42.527341  643815 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1006 14:17:42.534179  643815 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1006 14:17:42.537350  643815 out.go:252]   - Generating certificates and keys ...
	I1006 14:17:42.537426  643815 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1006 14:17:42.537518  643815 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1006 14:17:42.537603  643815 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1006 14:17:42.537655  643815 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1006 14:17:42.537740  643815 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1006 14:17:42.537819  643815 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1006 14:17:42.537916  643815 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1006 14:17:42.538006  643815 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1006 14:17:42.538062  643815 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1006 14:17:42.538132  643815 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1006 14:17:42.538178  643815 kubeadm.go:318] [certs] Using the existing "sa" key
	I1006 14:17:42.538251  643815 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1006 14:17:42.660038  643815 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1006 14:17:42.919919  643815 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1006 14:17:43.237121  643815 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1006 14:17:43.528234  643815 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1006 14:17:43.909922  643815 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1006 14:17:43.910367  643815 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1006 14:17:43.912530  643815 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1006 14:17:43.914596  643815 out.go:252]   - Booting up control plane ...
	I1006 14:17:43.914706  643815 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1006 14:17:43.914797  643815 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1006 14:17:43.915715  643815 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1006 14:17:43.929265  643815 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1006 14:17:43.929380  643815 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1006 14:17:43.935638  643815 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1006 14:17:43.935928  643815 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1006 14:17:43.935986  643815 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1006 14:17:44.043868  643815 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1006 14:17:44.044017  643815 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1006 14:17:44.545542  643815 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 501.826192ms
	I1006 14:17:44.548436  643815 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1006 14:17:44.548539  643815 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	I1006 14:17:44.548664  643815 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1006 14:17:44.548731  643815 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1006 14:21:44.549917  643815 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000539772s
	I1006 14:21:44.550271  643815 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000878282s
	I1006 14:21:44.550482  643815 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000837756s
	I1006 14:21:44.550515  643815 kubeadm.go:318] 
	I1006 14:21:44.550743  643815 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1006 14:21:44.550946  643815 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1006 14:21:44.551128  643815 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1006 14:21:44.551315  643815 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1006 14:21:44.551425  643815 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1006 14:21:44.551655  643815 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1006 14:21:44.551665  643815 kubeadm.go:318] 
	I1006 14:21:44.554341  643815 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1006 14:21:44.554459  643815 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1006 14:21:44.554982  643815 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1006 14:21:44.555035  643815 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1006 14:21:44.555120  643815 kubeadm.go:402] duration metric: took 8m7.548360191s to StartCluster
	I1006 14:21:44.555200  643815 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:21:44.555287  643815 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:21:44.583337  643815 cri.go:89] found id: ""
	I1006 14:21:44.583366  643815 logs.go:282] 0 containers: []
	W1006 14:21:44.583374  643815 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:21:44.583385  643815 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:21:44.583441  643815 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:21:44.610088  643815 cri.go:89] found id: ""
	I1006 14:21:44.610105  643815 logs.go:282] 0 containers: []
	W1006 14:21:44.610112  643815 logs.go:284] No container was found matching "etcd"
	I1006 14:21:44.610116  643815 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:21:44.610170  643815 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:21:44.637071  643815 cri.go:89] found id: ""
	I1006 14:21:44.637086  643815 logs.go:282] 0 containers: []
	W1006 14:21:44.637093  643815 logs.go:284] No container was found matching "coredns"
	I1006 14:21:44.637098  643815 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:21:44.637163  643815 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:21:44.662233  643815 cri.go:89] found id: ""
	I1006 14:21:44.662252  643815 logs.go:282] 0 containers: []
	W1006 14:21:44.662262  643815 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:21:44.662267  643815 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:21:44.662321  643815 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:21:44.687781  643815 cri.go:89] found id: ""
	I1006 14:21:44.687796  643815 logs.go:282] 0 containers: []
	W1006 14:21:44.687802  643815 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:21:44.687807  643815 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:21:44.687854  643815 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:21:44.713360  643815 cri.go:89] found id: ""
	I1006 14:21:44.713380  643815 logs.go:282] 0 containers: []
	W1006 14:21:44.713388  643815 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:21:44.713394  643815 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:21:44.713450  643815 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:21:44.739422  643815 cri.go:89] found id: ""
	I1006 14:21:44.739439  643815 logs.go:282] 0 containers: []
	W1006 14:21:44.739449  643815 logs.go:284] No container was found matching "kindnet"
	I1006 14:21:44.739458  643815 logs.go:123] Gathering logs for kubelet ...
	I1006 14:21:44.739468  643815 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:21:44.808728  643815 logs.go:123] Gathering logs for dmesg ...
	I1006 14:21:44.808752  643815 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:21:44.822498  643815 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:21:44.822526  643815 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:21:44.884547  643815 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:21:44.877041    2416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:21:44.877694    2416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:21:44.878768    2416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:21:44.879236    2416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:21:44.880870    2416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:21:44.877041    2416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:21:44.877694    2416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:21:44.878768    2416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:21:44.879236    2416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:21:44.880870    2416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 14:21:44.884561  643815 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:21:44.884574  643815 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:21:44.946804  643815 logs.go:123] Gathering logs for container status ...
	I1006 14:21:44.946834  643815 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1006 14:21:44.977440  643815 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.826192ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000539772s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000878282s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000837756s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1006 14:21:44.977519  643815 out.go:285] * 
	W1006 14:21:44.977599  643815 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.826192ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000539772s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000878282s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000837756s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1006 14:21:44.977611  643815 out.go:285] * 
	W1006 14:21:44.979358  643815 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1006 14:21:44.982769  643815 out.go:203] 
	W1006 14:21:44.983778  643815 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.826192ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000539772s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000878282s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000837756s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1006 14:21:44.983799  643815 out.go:285] * 
	I1006 14:21:44.985688  643815 out.go:203] 
	
	
	==> CRI-O <==
	Oct 06 14:21:40 functional-135520 crio[784]: time="2025-10-06T14:21:40.54118642Z" level=info msg="createCtr: removing container 98315b5aed2128aea1728e6481bb7a55dc8f59dc05c75984de907e059a0e0d2e" id=e027fc14-0e28-4b60-ae8a-52387ea1445a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:21:40 functional-135520 crio[784]: time="2025-10-06T14:21:40.541235727Z" level=info msg="createCtr: deleting container 98315b5aed2128aea1728e6481bb7a55dc8f59dc05c75984de907e059a0e0d2e from storage" id=e027fc14-0e28-4b60-ae8a-52387ea1445a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:21:40 functional-135520 crio[784]: time="2025-10-06T14:21:40.54327583Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-functional-135520_kube-system_5115bd1eba9594a3f2b99b5d6a4b9d59_0" id=e027fc14-0e28-4b60-ae8a-52387ea1445a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:21:41 functional-135520 crio[784]: time="2025-10-06T14:21:41.516872762Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=cc278d98-39d9-48e5-aace-7ab338633e49 name=/runtime.v1.ImageService/ImageStatus
	Oct 06 14:21:41 functional-135520 crio[784]: time="2025-10-06T14:21:41.516916605Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=53be2577-2d9c-43ec-92c3-61fd7338f809 name=/runtime.v1.ImageService/ImageStatus
	Oct 06 14:21:41 functional-135520 crio[784]: time="2025-10-06T14:21:41.517737801Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=d1ce0e9c-e4e9-461d-9fb1-7de9168d4f86 name=/runtime.v1.ImageService/ImageStatus
	Oct 06 14:21:41 functional-135520 crio[784]: time="2025-10-06T14:21:41.517789208Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=d3d98157-1b0f-4fdd-abaf-47ca27c28983 name=/runtime.v1.ImageService/ImageStatus
	Oct 06 14:21:41 functional-135520 crio[784]: time="2025-10-06T14:21:41.518641631Z" level=info msg="Creating container: kube-system/kube-controller-manager-functional-135520/kube-controller-manager" id=7e7e6f2c-1d59-49eb-8a34-b8c1d3a06233 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:21:41 functional-135520 crio[784]: time="2025-10-06T14:21:41.518642276Z" level=info msg="Creating container: kube-system/etcd-functional-135520/etcd" id=1da30008-63ed-4ffa-a8ac-e78e58446655 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:21:41 functional-135520 crio[784]: time="2025-10-06T14:21:41.518879422Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 14:21:41 functional-135520 crio[784]: time="2025-10-06T14:21:41.518905491Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 14:21:41 functional-135520 crio[784]: time="2025-10-06T14:21:41.523504131Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 14:21:41 functional-135520 crio[784]: time="2025-10-06T14:21:41.524060279Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 14:21:41 functional-135520 crio[784]: time="2025-10-06T14:21:41.524859457Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 14:21:41 functional-135520 crio[784]: time="2025-10-06T14:21:41.525415892Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 14:21:41 functional-135520 crio[784]: time="2025-10-06T14:21:41.543121919Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=7e7e6f2c-1d59-49eb-8a34-b8c1d3a06233 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:21:41 functional-135520 crio[784]: time="2025-10-06T14:21:41.543958901Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=1da30008-63ed-4ffa-a8ac-e78e58446655 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:21:41 functional-135520 crio[784]: time="2025-10-06T14:21:41.544713941Z" level=info msg="createCtr: deleting container ID 56ecf03f739333edc16c0a46206666feaea55a7d4103cd2c3d9725b2148def0b from idIndex" id=7e7e6f2c-1d59-49eb-8a34-b8c1d3a06233 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:21:41 functional-135520 crio[784]: time="2025-10-06T14:21:41.544752295Z" level=info msg="createCtr: removing container 56ecf03f739333edc16c0a46206666feaea55a7d4103cd2c3d9725b2148def0b" id=7e7e6f2c-1d59-49eb-8a34-b8c1d3a06233 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:21:41 functional-135520 crio[784]: time="2025-10-06T14:21:41.544787675Z" level=info msg="createCtr: deleting container 56ecf03f739333edc16c0a46206666feaea55a7d4103cd2c3d9725b2148def0b from storage" id=7e7e6f2c-1d59-49eb-8a34-b8c1d3a06233 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:21:41 functional-135520 crio[784]: time="2025-10-06T14:21:41.545400021Z" level=info msg="createCtr: deleting container ID f916239e80681f3fe8d9f72751cfb1b3d7dd79e87fca62d1eee836f8f381c8ed from idIndex" id=1da30008-63ed-4ffa-a8ac-e78e58446655 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:21:41 functional-135520 crio[784]: time="2025-10-06T14:21:41.545432499Z" level=info msg="createCtr: removing container f916239e80681f3fe8d9f72751cfb1b3d7dd79e87fca62d1eee836f8f381c8ed" id=1da30008-63ed-4ffa-a8ac-e78e58446655 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:21:41 functional-135520 crio[784]: time="2025-10-06T14:21:41.545461086Z" level=info msg="createCtr: deleting container f916239e80681f3fe8d9f72751cfb1b3d7dd79e87fca62d1eee836f8f381c8ed from storage" id=1da30008-63ed-4ffa-a8ac-e78e58446655 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:21:41 functional-135520 crio[784]: time="2025-10-06T14:21:41.548302414Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-functional-135520_kube-system_09d686e340c6809af92c3f18dc65ef21_0" id=7e7e6f2c-1d59-49eb-8a34-b8c1d3a06233 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:21:41 functional-135520 crio[784]: time="2025-10-06T14:21:41.548628179Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-functional-135520_kube-system_f24ebbe4b3fc964d32e35d345c0d3653_0" id=1da30008-63ed-4ffa-a8ac-e78e58446655 name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:21:45.882341    2566 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:21:45.882920    2566 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:21:45.884617    2566 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:21:45.885093    2566 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:21:45.886324    2566 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	
	
	==> kernel <==
	 14:21:45 up  5:04,  0 user,  load average: 0.00, 0.05, 0.55
	Linux functional-135520 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 06 14:21:40 functional-135520 kubelet[1801]: E1006 14:21:40.543712    1801 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 06 14:21:40 functional-135520 kubelet[1801]:         container kube-scheduler start failed in pod kube-scheduler-functional-135520_kube-system(5115bd1eba9594a3f2b99b5d6a4b9d59): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 06 14:21:40 functional-135520 kubelet[1801]:  > logger="UnhandledError"
	Oct 06 14:21:40 functional-135520 kubelet[1801]: E1006 14:21:40.543758    1801 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-functional-135520" podUID="5115bd1eba9594a3f2b99b5d6a4b9d59"
	Oct 06 14:21:41 functional-135520 kubelet[1801]: E1006 14:21:41.142064    1801 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-135520?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="7s"
	Oct 06 14:21:41 functional-135520 kubelet[1801]: I1006 14:21:41.298141    1801 kubelet_node_status.go:75] "Attempting to register node" node="functional-135520"
	Oct 06 14:21:41 functional-135520 kubelet[1801]: E1006 14:21:41.298579    1801 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8441/api/v1/nodes\": dial tcp 192.168.49.2:8441: connect: connection refused" node="functional-135520"
	Oct 06 14:21:41 functional-135520 kubelet[1801]: E1006 14:21:41.516384    1801 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-135520\" not found" node="functional-135520"
	Oct 06 14:21:41 functional-135520 kubelet[1801]: E1006 14:21:41.516526    1801 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-135520\" not found" node="functional-135520"
	Oct 06 14:21:41 functional-135520 kubelet[1801]: E1006 14:21:41.548622    1801 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 06 14:21:41 functional-135520 kubelet[1801]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 06 14:21:41 functional-135520 kubelet[1801]:  > podSandboxID="9bdc0d5e26867b5c10d88c001a4330075bcd5aaeba5b14e7f6f5bdc2fd378eb4"
	Oct 06 14:21:41 functional-135520 kubelet[1801]: E1006 14:21:41.548740    1801 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 06 14:21:41 functional-135520 kubelet[1801]:         container kube-controller-manager start failed in pod kube-controller-manager-functional-135520_kube-system(09d686e340c6809af92c3f18dc65ef21): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 06 14:21:41 functional-135520 kubelet[1801]:  > logger="UnhandledError"
	Oct 06 14:21:41 functional-135520 kubelet[1801]: E1006 14:21:41.548782    1801 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-functional-135520" podUID="09d686e340c6809af92c3f18dc65ef21"
	Oct 06 14:21:41 functional-135520 kubelet[1801]: E1006 14:21:41.548908    1801 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 06 14:21:41 functional-135520 kubelet[1801]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 06 14:21:41 functional-135520 kubelet[1801]:  > podSandboxID="f122bf3cdcc12aa8e4b9a0e1655bceae045fdc99afe781ed4e5deffc77adf21d"
	Oct 06 14:21:41 functional-135520 kubelet[1801]: E1006 14:21:41.548986    1801 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 06 14:21:41 functional-135520 kubelet[1801]:         container etcd start failed in pod etcd-functional-135520_kube-system(f24ebbe4b3fc964d32e35d345c0d3653): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 06 14:21:41 functional-135520 kubelet[1801]:  > logger="UnhandledError"
	Oct 06 14:21:41 functional-135520 kubelet[1801]: E1006 14:21:41.550119    1801 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-functional-135520" podUID="f24ebbe4b3fc964d32e35d345c0d3653"
	Oct 06 14:21:44 functional-135520 kubelet[1801]: E1006 14:21:44.527520    1801 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-135520\" not found"
	Oct 06 14:21:45 functional-135520 kubelet[1801]: E1006 14:21:45.340722    1801 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8441/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8441: connect: connection refused" event="&Event{ObjectMeta:{functional-135520.186beca30fea008b  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-135520,UID:functional-135520,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node functional-135520 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:functional-135520,},FirstTimestamp:2025-10-06 14:17:44.509128843 +0000 UTC m=+0.464938753,LastTimestamp:2025-10-06 14:17:44.509128843 +0000 UTC m=+0.464938753,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-135520,}"
	

-- /stdout --
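	The CRI-O and kubelet sections above agree on the proximate cause: control-plane container creation keeps failing with "cannot open sd-bus: No such file or directory", so the kubeadm health checks on :8441, :10257 and :10259 can never pass. A minimal way to confirm this from the host, assuming the docker-driver node is reachable via `minikube ssh` under the profile name shown in the logs, and assuming /run/dbus/system_bus_socket is the D-Bus socket the systemd cgroup manager needs (not confirmed by this report):

	# Grep the CRI-O journal for the sd-bus failure (mirrors the log-gathering command above)
	minikube ssh -p functional-135520 -- sudo journalctl -u crio -n 400 | grep -i sd-bus
	# Confirm that no control-plane containers were actually created
	minikube ssh -p functional-135520 -- sudo crictl ps -a
	# Check whether the D-Bus system socket exists inside the node (path is an assumption)
	minikube ssh -p functional-135520 -- ls -l /run/dbus/system_bus_socket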
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-135520 -n functional-135520
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-135520 -n functional-135520: exit status 6 (293.245761ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1006 14:21:46.260711  649563 status.go:458] kubeconfig endpoint: get endpoint: "functional-135520" does not appear in /home/jenkins/minikube-integration/21701-626179/kubeconfig

** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "functional-135520" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/serial/StartWithProxy (499.23s)
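The "stale minikube-vm" warning in stdout and the kubeconfig-endpoint error in stderr describe the same problem: the kubeconfig entry for this profile is missing or out of date. Following the hint printed in the status output (a sketch, using the profile name from this run):

    minikube update-context -p functional-135520    # rewrite the kubeconfig entry for this profile
    kubectl config current-context                  # should print functional-135520 afterwards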

TestFunctional/serial/SoftStart (366.56s)

=== RUN   TestFunctional/serial/SoftStart
I1006 14:21:46.277681  629719 config.go:182] Loaded profile config "functional-135520": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-135520 --alsologtostderr -v=8
functional_test.go:674: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-135520 --alsologtostderr -v=8: exit status 80 (6m3.922798265s)

-- stdout --
	* [functional-135520] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21701
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21701-626179/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21701-626179/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "functional-135520" primary control-plane node in "functional-135520" cluster
	* Pulling base image v0.0.48-1759382731-21643 ...
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: 
	
	

-- /stdout --
** stderr ** 
	I1006 14:21:46.323016  649678 out.go:360] Setting OutFile to fd 1 ...
	I1006 14:21:46.323271  649678 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 14:21:46.323279  649678 out.go:374] Setting ErrFile to fd 2...
	I1006 14:21:46.323283  649678 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 14:21:46.323475  649678 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-626179/.minikube/bin
	I1006 14:21:46.323908  649678 out.go:368] Setting JSON to false
	I1006 14:21:46.324826  649678 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":18242,"bootTime":1759742264,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1006 14:21:46.324926  649678 start.go:140] virtualization: kvm guest
	I1006 14:21:46.326925  649678 out.go:179] * [functional-135520] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1006 14:21:46.327942  649678 notify.go:220] Checking for updates...
	I1006 14:21:46.327965  649678 out.go:179]   - MINIKUBE_LOCATION=21701
	I1006 14:21:46.329155  649678 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1006 14:21:46.330229  649678 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21701-626179/kubeconfig
	I1006 14:21:46.331298  649678 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21701-626179/.minikube
	I1006 14:21:46.332353  649678 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1006 14:21:46.333341  649678 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1006 14:21:46.334666  649678 config.go:182] Loaded profile config "functional-135520": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 14:21:46.334805  649678 driver.go:421] Setting default libvirt URI to qemu:///system
	I1006 14:21:46.359710  649678 docker.go:123] docker version: linux-28.5.0:Docker Engine - Community
	I1006 14:21:46.359861  649678 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 14:21:46.415678  649678 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-06 14:21:46.405264016 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1006 14:21:46.415787  649678 docker.go:318] overlay module found
	I1006 14:21:46.417155  649678 out.go:179] * Using the docker driver based on existing profile
	I1006 14:21:46.418292  649678 start.go:304] selected driver: docker
	I1006 14:21:46.418308  649678 start.go:924] validating driver "docker" against &{Name:functional-135520 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-135520 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 14:21:46.418380  649678 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1006 14:21:46.418468  649678 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 14:21:46.473903  649678 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-06 14:21:46.464043789 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1006 14:21:46.474648  649678 cni.go:84] Creating CNI manager for ""
	I1006 14:21:46.474719  649678 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1006 14:21:46.474770  649678 start.go:348] cluster config:
	{Name:functional-135520 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-135520 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 14:21:46.476311  649678 out.go:179] * Starting "functional-135520" primary control-plane node in "functional-135520" cluster
	I1006 14:21:46.477235  649678 cache.go:123] Beginning downloading kic base image for docker with crio
	I1006 14:21:46.478074  649678 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1006 14:21:46.479119  649678 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1006 14:21:46.479164  649678 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21701-626179/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1006 14:21:46.479185  649678 cache.go:58] Caching tarball of preloaded images
	I1006 14:21:46.479228  649678 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1006 14:21:46.479294  649678 preload.go:233] Found /home/jenkins/minikube-integration/21701-626179/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1006 14:21:46.479309  649678 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1006 14:21:46.479413  649678 profile.go:143] Saving config to /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/config.json ...
	I1006 14:21:46.499695  649678 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1006 14:21:46.499723  649678 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1006 14:21:46.499744  649678 cache.go:232] Successfully downloaded all kic artifacts
	I1006 14:21:46.499779  649678 start.go:360] acquireMachinesLock for functional-135520: {Name:mk634323c4619e77647ac9d9aaca492e399526ae Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1006 14:21:46.499864  649678 start.go:364] duration metric: took 47.895µs to acquireMachinesLock for "functional-135520"
	I1006 14:21:46.499886  649678 start.go:96] Skipping create...Using existing machine configuration
	I1006 14:21:46.499892  649678 fix.go:54] fixHost starting: 
	I1006 14:21:46.500243  649678 cli_runner.go:164] Run: docker container inspect functional-135520 --format={{.State.Status}}
	I1006 14:21:46.517601  649678 fix.go:112] recreateIfNeeded on functional-135520: state=Running err=<nil>
	W1006 14:21:46.517640  649678 fix.go:138] unexpected machine state, will restart: <nil>
	I1006 14:21:46.519112  649678 out.go:252] * Updating the running docker "functional-135520" container ...
	I1006 14:21:46.519143  649678 machine.go:93] provisionDockerMachine start ...
	I1006 14:21:46.519223  649678 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-135520
	I1006 14:21:46.537175  649678 main.go:141] libmachine: Using SSH client type: native
	I1006 14:21:46.537424  649678 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32878 <nil> <nil>}
	I1006 14:21:46.537438  649678 main.go:141] libmachine: About to run SSH command:
	hostname
	I1006 14:21:46.682374  649678 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-135520
	
	I1006 14:21:46.682420  649678 ubuntu.go:182] provisioning hostname "functional-135520"
	I1006 14:21:46.682484  649678 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-135520
	I1006 14:21:46.700103  649678 main.go:141] libmachine: Using SSH client type: native
	I1006 14:21:46.700382  649678 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32878 <nil> <nil>}
	I1006 14:21:46.700401  649678 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-135520 && echo "functional-135520" | sudo tee /etc/hostname
	I1006 14:21:46.853845  649678 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-135520
	
	I1006 14:21:46.853924  649678 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-135520
	I1006 14:21:46.872015  649678 main.go:141] libmachine: Using SSH client type: native
	I1006 14:21:46.872265  649678 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32878 <nil> <nil>}
	I1006 14:21:46.872284  649678 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-135520' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-135520/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-135520' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1006 14:21:47.017154  649678 main.go:141] libmachine: SSH cmd err, output: <nil>: 
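The here-script above is idempotent: it leaves /etc/hosts alone if any line already maps the hostname, rewrites an existing 127.0.1.1 entry if there is one, and only appends otherwise. A quick check inside the node (hypothetical session, e.g. via minikube ssh):

    grep -n 'functional-135520' /etc/hosts    # expect a 127.0.1.1 mapping
    hostname                                  # expect: functional-135520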
	I1006 14:21:47.017184  649678 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21701-626179/.minikube CaCertPath:/home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21701-626179/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21701-626179/.minikube}
	I1006 14:21:47.017239  649678 ubuntu.go:190] setting up certificates
	I1006 14:21:47.017253  649678 provision.go:84] configureAuth start
	I1006 14:21:47.017340  649678 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-135520
	I1006 14:21:47.035104  649678 provision.go:143] copyHostCerts
	I1006 14:21:47.035140  649678 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21701-626179/.minikube/key.pem
	I1006 14:21:47.035175  649678 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-626179/.minikube/key.pem, removing ...
	I1006 14:21:47.035198  649678 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-626179/.minikube/key.pem
	I1006 14:21:47.035336  649678 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21701-626179/.minikube/key.pem (1679 bytes)
	I1006 14:21:47.035448  649678 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21701-626179/.minikube/ca.pem
	I1006 14:21:47.035468  649678 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-626179/.minikube/ca.pem, removing ...
	I1006 14:21:47.035478  649678 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-626179/.minikube/ca.pem
	I1006 14:21:47.035513  649678 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21701-626179/.minikube/ca.pem (1082 bytes)
	I1006 14:21:47.035575  649678 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21701-626179/.minikube/cert.pem
	I1006 14:21:47.035593  649678 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-626179/.minikube/cert.pem, removing ...
	I1006 14:21:47.035599  649678 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-626179/.minikube/cert.pem
	I1006 14:21:47.035623  649678 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21701-626179/.minikube/cert.pem (1123 bytes)
	I1006 14:21:47.035688  649678 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21701-626179/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca-key.pem org=jenkins.functional-135520 san=[127.0.0.1 192.168.49.2 functional-135520 localhost minikube]
	I1006 14:21:47.332166  649678 provision.go:177] copyRemoteCerts
	I1006 14:21:47.332258  649678 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1006 14:21:47.332304  649678 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-135520
	I1006 14:21:47.351185  649678 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32878 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/functional-135520/id_rsa Username:docker}
	I1006 14:21:47.453191  649678 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1006 14:21:47.453264  649678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1006 14:21:47.470840  649678 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1006 14:21:47.470907  649678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1006 14:21:47.487466  649678 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1006 14:21:47.487518  649678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1006 14:21:47.504343  649678 provision.go:87] duration metric: took 487.07429ms to configureAuth
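configureAuth regenerated the machine server certificate with the SANs listed above (node IP, hostname, localhost, minikube) and copied it to /etc/docker/server.pem. One way to inspect the result on the node (a sketch; the -ext flag needs OpenSSL 1.1.1 or newer):

    sudo openssl x509 -in /etc/docker/server.pem -noout -ext subjectAltName
    # expected SANs, per the san=[...] list logged above (exact order/format may differ):
    #   IP:127.0.0.1, IP:192.168.49.2, DNS:functional-135520, DNS:localhost, DNS:minikube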
	I1006 14:21:47.504374  649678 ubuntu.go:206] setting minikube options for container-runtime
	I1006 14:21:47.504541  649678 config.go:182] Loaded profile config "functional-135520": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 14:21:47.504639  649678 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-135520
	I1006 14:21:47.523029  649678 main.go:141] libmachine: Using SSH client type: native
	I1006 14:21:47.523280  649678 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32878 <nil> <nil>}
	I1006 14:21:47.523307  649678 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1006 14:21:47.788227  649678 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1006 14:21:47.788259  649678 machine.go:96] duration metric: took 1.269106143s to provisionDockerMachine
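The sysconfig drop-in written a few lines up is how extra CRI-O flags (here, treating the service CIDR as an insecure registry) reach the daemon. Verifying on the node (a sketch; that the crio unit consumes the file via an EnvironmentFile is an assumption about the base image, not shown in this log):

    cat /etc/sysconfig/crio.minikube                # CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
    systemctl cat crio | grep -iA1 environmentfile  # where the drop-in is wired in, if at all
    systemctl is-active crio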
	I1006 14:21:47.788275  649678 start.go:293] postStartSetup for "functional-135520" (driver="docker")
	I1006 14:21:47.788290  649678 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1006 14:21:47.788372  649678 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1006 14:21:47.788428  649678 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-135520
	I1006 14:21:47.805850  649678 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32878 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/functional-135520/id_rsa Username:docker}
	I1006 14:21:47.908894  649678 ssh_runner.go:195] Run: cat /etc/os-release
	I1006 14:21:47.912773  649678 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1006 14:21:47.912795  649678 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1006 14:21:47.912801  649678 command_runner.go:130] > VERSION_ID="12"
	I1006 14:21:47.912807  649678 command_runner.go:130] > VERSION="12 (bookworm)"
	I1006 14:21:47.912813  649678 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1006 14:21:47.912819  649678 command_runner.go:130] > ID=debian
	I1006 14:21:47.912827  649678 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1006 14:21:47.912834  649678 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1006 14:21:47.912843  649678 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1006 14:21:47.912900  649678 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1006 14:21:47.912919  649678 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1006 14:21:47.912929  649678 filesync.go:126] Scanning /home/jenkins/minikube-integration/21701-626179/.minikube/addons for local assets ...
	I1006 14:21:47.912988  649678 filesync.go:126] Scanning /home/jenkins/minikube-integration/21701-626179/.minikube/files for local assets ...
	I1006 14:21:47.913065  649678 filesync.go:149] local asset: /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem -> 6297192.pem in /etc/ssl/certs
	I1006 14:21:47.913078  649678 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem -> /etc/ssl/certs/6297192.pem
	I1006 14:21:47.913143  649678 filesync.go:149] local asset: /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/test/nested/copy/629719/hosts -> hosts in /etc/test/nested/copy/629719
	I1006 14:21:47.913151  649678 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/test/nested/copy/629719/hosts -> /etc/test/nested/copy/629719/hosts
	I1006 14:21:47.913182  649678 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/629719
	I1006 14:21:47.920839  649678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem --> /etc/ssl/certs/6297192.pem (1708 bytes)
	I1006 14:21:47.937786  649678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/test/nested/copy/629719/hosts --> /etc/test/nested/copy/629719/hosts (40 bytes)
	I1006 14:21:47.954760  649678 start.go:296] duration metric: took 166.455369ms for postStartSetup
	I1006 14:21:47.954834  649678 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1006 14:21:47.954870  649678 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-135520
	I1006 14:21:47.972368  649678 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32878 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/functional-135520/id_rsa Username:docker}
	I1006 14:21:48.072535  649678 command_runner.go:130] > 38%
	I1006 14:21:48.072624  649678 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1006 14:21:48.077267  649678 command_runner.go:130] > 182G
	I1006 14:21:48.077574  649678 fix.go:56] duration metric: took 1.577678011s for fixHost
	I1006 14:21:48.077595  649678 start.go:83] releasing machines lock for "functional-135520", held for 1.577717734s
	I1006 14:21:48.077675  649678 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-135520
	I1006 14:21:48.095670  649678 ssh_runner.go:195] Run: cat /version.json
	I1006 14:21:48.095722  649678 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-135520
	I1006 14:21:48.095754  649678 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1006 14:21:48.095827  649678 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-135520
	I1006 14:21:48.113591  649678 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32878 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/functional-135520/id_rsa Username:docker}
	I1006 14:21:48.115313  649678 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32878 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/functional-135520/id_rsa Username:docker}
	I1006 14:21:48.268773  649678 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1006 14:21:48.268839  649678 command_runner.go:130] > {"iso_version": "v1.37.0-1758198818-20370", "kicbase_version": "v0.0.48-1759382731-21643", "minikube_version": "v1.37.0", "commit": "b0c70dd4d342e6443a02916e52d246d8cdb181c4"}
	I1006 14:21:48.268953  649678 ssh_runner.go:195] Run: systemctl --version
	I1006 14:21:48.275683  649678 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1006 14:21:48.275717  649678 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1006 14:21:48.275778  649678 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1006 14:21:48.311695  649678 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1006 14:21:48.316662  649678 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1006 14:21:48.316719  649678 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1006 14:21:48.316778  649678 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1006 14:21:48.324682  649678 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1006 14:21:48.324705  649678 start.go:495] detecting cgroup driver to use...
	I1006 14:21:48.324740  649678 detect.go:190] detected "systemd" cgroup driver on host os
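The runtime's cgroup driver is matched to the host's; "systemd" here agrees with the CgroupDriver field in the docker info dumps above. Two host-side checks (a sketch):

    docker info --format '{{.CgroupDriver}}'    # systemd, matching the detection above
    stat -fc %T /sys/fs/cgroup                  # cgroup2fs on a cgroup v2 host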
	I1006 14:21:48.324780  649678 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1006 14:21:48.339343  649678 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1006 14:21:48.350971  649678 docker.go:218] disabling cri-docker service (if available) ...
	I1006 14:21:48.351020  649678 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1006 14:21:48.364377  649678 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1006 14:21:48.375810  649678 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1006 14:21:48.466998  649678 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1006 14:21:48.555437  649678 docker.go:234] disabling docker service ...
	I1006 14:21:48.555507  649678 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1006 14:21:48.569642  649678 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1006 14:21:48.581371  649678 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1006 14:21:48.660341  649678 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1006 14:21:48.745051  649678 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
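Only one runtime may own the node's CRI socket, so the docker and cri-docker units are stopped, disabled, and masked before CRI-O is configured. Confirming inside the node (a sketch):

    systemctl is-enabled docker.service cri-docker.service 2>&1    # expect: masked, for both
    systemctl is-active crio                                       # expect: active once restarted below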
	I1006 14:21:48.757689  649678 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1006 14:21:48.770829  649678 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
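/etc/crictl.yaml, written just above, is what lets a bare `crictl` find CRI-O's socket; passing the endpoint explicitly is equivalent (a sketch):

    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock info    # same effect as relying on /etc/crictl.yaml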
	I1006 14:21:48.771733  649678 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1006 14:21:48.771806  649678 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:21:48.781084  649678 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1006 14:21:48.781164  649678 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:21:48.790125  649678 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:21:48.798751  649678 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:21:48.807637  649678 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1006 14:21:48.815986  649678 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:21:48.824650  649678 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:21:48.832873  649678 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
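The run of sed edits above rewrites /etc/crio/crio.conf.d/02-crio.conf in place. A quick way to verify every key they touch (expected values read straight from the sed expressions; a sketch):

    sudo grep -nE 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
        /etc/crio/crio.conf.d/02-crio.conf
    # expect:
    #   pause_image = "registry.k8s.io/pause:3.10.1"
    #   cgroup_manager = "systemd"
    #   conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",    (inside default_sysctls = [ ... ])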
	I1006 14:21:48.841368  649678 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1006 14:21:48.847999  649678 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1006 14:21:48.848646  649678 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1006 14:21:48.855735  649678 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 14:21:48.941247  649678 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1006 14:21:49.054732  649678 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1006 14:21:49.054813  649678 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1006 14:21:49.059042  649678 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1006 14:21:49.059070  649678 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1006 14:21:49.059079  649678 command_runner.go:130] > Device: 0,59	Inode: 3845        Links: 1
	I1006 14:21:49.059086  649678 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1006 14:21:49.059091  649678 command_runner.go:130] > Access: 2025-10-06 14:21:49.037104102 +0000
	I1006 14:21:49.059104  649678 command_runner.go:130] > Modify: 2025-10-06 14:21:49.037104102 +0000
	I1006 14:21:49.059109  649678 command_runner.go:130] > Change: 2025-10-06 14:21:49.037104102 +0000
	I1006 14:21:49.059113  649678 command_runner.go:130] >  Birth: 2025-10-06 14:21:49.037104102 +0000
	I1006 14:21:49.059133  649678 start.go:563] Will wait 60s for crictl version
	I1006 14:21:49.059181  649678 ssh_runner.go:195] Run: which crictl
	I1006 14:21:49.062689  649678 command_runner.go:130] > /usr/local/bin/crictl
	I1006 14:21:49.062764  649678 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1006 14:21:49.086605  649678 command_runner.go:130] > Version:  0.1.0
	I1006 14:21:49.086623  649678 command_runner.go:130] > RuntimeName:  cri-o
	I1006 14:21:49.086627  649678 command_runner.go:130] > RuntimeVersion:  1.34.1
	I1006 14:21:49.086632  649678 command_runner.go:130] > RuntimeApiVersion:  v1
	I1006 14:21:49.088423  649678 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1006 14:21:49.088499  649678 ssh_runner.go:195] Run: crio --version
	I1006 14:21:49.118625  649678 command_runner.go:130] > crio version 1.34.1
	I1006 14:21:49.118652  649678 command_runner.go:130] >    GitCommit:      8e14bff4153ba033f12ed3ffa3cadaca5425b313
	I1006 14:21:49.118659  649678 command_runner.go:130] >    GitCommitDate:  2025-10-01T13:04:13Z
	I1006 14:21:49.118666  649678 command_runner.go:130] >    GitTreeState:   dirty
	I1006 14:21:49.118672  649678 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1006 14:21:49.118678  649678 command_runner.go:130] >    GoVersion:      go1.24.6
	I1006 14:21:49.118683  649678 command_runner.go:130] >    Compiler:       gc
	I1006 14:21:49.118692  649678 command_runner.go:130] >    Platform:       linux/amd64
	I1006 14:21:49.118700  649678 command_runner.go:130] >    Linkmode:       static
	I1006 14:21:49.118708  649678 command_runner.go:130] >    BuildTags:
	I1006 14:21:49.118718  649678 command_runner.go:130] >      static
	I1006 14:21:49.118724  649678 command_runner.go:130] >      netgo
	I1006 14:21:49.118729  649678 command_runner.go:130] >      osusergo
	I1006 14:21:49.118739  649678 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1006 14:21:49.118745  649678 command_runner.go:130] >      seccomp
	I1006 14:21:49.118749  649678 command_runner.go:130] >      apparmor
	I1006 14:21:49.118753  649678 command_runner.go:130] >      selinux
	I1006 14:21:49.118757  649678 command_runner.go:130] >    LDFlags:          unknown
	I1006 14:21:49.118781  649678 command_runner.go:130] >    SeccompEnabled:   true
	I1006 14:21:49.118789  649678 command_runner.go:130] >    AppArmorEnabled:  false
	I1006 14:21:49.118869  649678 ssh_runner.go:195] Run: crio --version
	I1006 14:21:49.147173  649678 command_runner.go:130] > crio version 1.34.1
	I1006 14:21:49.147230  649678 command_runner.go:130] >    GitCommit:      8e14bff4153ba033f12ed3ffa3cadaca5425b313
	I1006 14:21:49.147241  649678 command_runner.go:130] >    GitCommitDate:  2025-10-01T13:04:13Z
	I1006 14:21:49.147249  649678 command_runner.go:130] >    GitTreeState:   dirty
	I1006 14:21:49.147257  649678 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1006 14:21:49.147263  649678 command_runner.go:130] >    GoVersion:      go1.24.6
	I1006 14:21:49.147267  649678 command_runner.go:130] >    Compiler:       gc
	I1006 14:21:49.147283  649678 command_runner.go:130] >    Platform:       linux/amd64
	I1006 14:21:49.147292  649678 command_runner.go:130] >    Linkmode:       static
	I1006 14:21:49.147296  649678 command_runner.go:130] >    BuildTags:
	I1006 14:21:49.147299  649678 command_runner.go:130] >      static
	I1006 14:21:49.147303  649678 command_runner.go:130] >      netgo
	I1006 14:21:49.147309  649678 command_runner.go:130] >      osusergo
	I1006 14:21:49.147313  649678 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1006 14:21:49.147320  649678 command_runner.go:130] >      seccomp
	I1006 14:21:49.147324  649678 command_runner.go:130] >      apparmor
	I1006 14:21:49.147330  649678 command_runner.go:130] >      selinux
	I1006 14:21:49.147334  649678 command_runner.go:130] >    LDFlags:          unknown
	I1006 14:21:49.147340  649678 command_runner.go:130] >    SeccompEnabled:   true
	I1006 14:21:49.147443  649678 command_runner.go:130] >    AppArmorEnabled:  false
	I1006 14:21:49.149760  649678 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1006 14:21:49.150923  649678 cli_runner.go:164] Run: docker network inspect functional-135520 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1006 14:21:49.168305  649678 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1006 14:21:49.172524  649678 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1006 14:21:49.172624  649678 kubeadm.go:883] updating cluster {Name:functional-135520 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-135520 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1006 14:21:49.172735  649678 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1006 14:21:49.172777  649678 ssh_runner.go:195] Run: sudo crictl images --output json
	I1006 14:21:49.203555  649678 command_runner.go:130] > {
	I1006 14:21:49.203573  649678 command_runner.go:130] >   "images":  [
	I1006 14:21:49.203577  649678 command_runner.go:130] >     {
	I1006 14:21:49.203585  649678 command_runner.go:130] >       "id":  "409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c",
	I1006 14:21:49.203589  649678 command_runner.go:130] >       "repoTags":  [
	I1006 14:21:49.203596  649678 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1006 14:21:49.203599  649678 command_runner.go:130] >       ],
	I1006 14:21:49.203603  649678 command_runner.go:130] >       "repoDigests":  [
	I1006 14:21:49.203613  649678 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1006 14:21:49.203619  649678 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"
	I1006 14:21:49.203623  649678 command_runner.go:130] >       ],
	I1006 14:21:49.203628  649678 command_runner.go:130] >       "size":  "109379124",
	I1006 14:21:49.203634  649678 command_runner.go:130] >       "username":  "",
	I1006 14:21:49.203641  649678 command_runner.go:130] >       "pinned":  false
	I1006 14:21:49.203647  649678 command_runner.go:130] >     },
	I1006 14:21:49.203650  649678 command_runner.go:130] >     {
	I1006 14:21:49.203656  649678 command_runner.go:130] >       "id":  "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1006 14:21:49.203660  649678 command_runner.go:130] >       "repoTags":  [
	I1006 14:21:49.203665  649678 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1006 14:21:49.203671  649678 command_runner.go:130] >       ],
	I1006 14:21:49.203676  649678 command_runner.go:130] >       "repoDigests":  [
	I1006 14:21:49.203684  649678 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1006 14:21:49.203694  649678 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1006 14:21:49.203697  649678 command_runner.go:130] >       ],
	I1006 14:21:49.203701  649678 command_runner.go:130] >       "size":  "31470524",
	I1006 14:21:49.203705  649678 command_runner.go:130] >       "username":  "",
	I1006 14:21:49.203716  649678 command_runner.go:130] >       "pinned":  false
	I1006 14:21:49.203722  649678 command_runner.go:130] >     },
	I1006 14:21:49.203725  649678 command_runner.go:130] >     {
	I1006 14:21:49.203731  649678 command_runner.go:130] >       "id":  "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969",
	I1006 14:21:49.203737  649678 command_runner.go:130] >       "repoTags":  [
	I1006 14:21:49.203742  649678 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.12.1"
	I1006 14:21:49.203748  649678 command_runner.go:130] >       ],
	I1006 14:21:49.203752  649678 command_runner.go:130] >       "repoDigests":  [
	I1006 14:21:49.203759  649678 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998",
	I1006 14:21:49.203768  649678 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"
	I1006 14:21:49.203771  649678 command_runner.go:130] >       ],
	I1006 14:21:49.203775  649678 command_runner.go:130] >       "size":  "76103547",
	I1006 14:21:49.203779  649678 command_runner.go:130] >       "username":  "nonroot",
	I1006 14:21:49.203783  649678 command_runner.go:130] >       "pinned":  false
	I1006 14:21:49.203785  649678 command_runner.go:130] >     },
	I1006 14:21:49.203789  649678 command_runner.go:130] >     {
	I1006 14:21:49.203794  649678 command_runner.go:130] >       "id":  "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115",
	I1006 14:21:49.203799  649678 command_runner.go:130] >       "repoTags":  [
	I1006 14:21:49.203804  649678 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.4-0"
	I1006 14:21:49.203807  649678 command_runner.go:130] >       ],
	I1006 14:21:49.203811  649678 command_runner.go:130] >       "repoDigests":  [
	I1006 14:21:49.203817  649678 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f",
	I1006 14:21:49.203826  649678 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"
	I1006 14:21:49.203829  649678 command_runner.go:130] >       ],
	I1006 14:21:49.203836  649678 command_runner.go:130] >       "size":  "195976448",
	I1006 14:21:49.203840  649678 command_runner.go:130] >       "uid":  {
	I1006 14:21:49.203844  649678 command_runner.go:130] >         "value":  "0"
	I1006 14:21:49.203847  649678 command_runner.go:130] >       },
	I1006 14:21:49.203855  649678 command_runner.go:130] >       "username":  "",
	I1006 14:21:49.203861  649678 command_runner.go:130] >       "pinned":  false
	I1006 14:21:49.203864  649678 command_runner.go:130] >     },
	I1006 14:21:49.203867  649678 command_runner.go:130] >     {
	I1006 14:21:49.203873  649678 command_runner.go:130] >       "id":  "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97",
	I1006 14:21:49.203879  649678 command_runner.go:130] >       "repoTags":  [
	I1006 14:21:49.203884  649678 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.34.1"
	I1006 14:21:49.203887  649678 command_runner.go:130] >       ],
	I1006 14:21:49.203891  649678 command_runner.go:130] >       "repoDigests":  [
	I1006 14:21:49.203901  649678 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964",
	I1006 14:21:49.203907  649678 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"
	I1006 14:21:49.203913  649678 command_runner.go:130] >       ],
	I1006 14:21:49.203916  649678 command_runner.go:130] >       "size":  "89046001",
	I1006 14:21:49.203920  649678 command_runner.go:130] >       "uid":  {
	I1006 14:21:49.203925  649678 command_runner.go:130] >         "value":  "0"
	I1006 14:21:49.203928  649678 command_runner.go:130] >       },
	I1006 14:21:49.203931  649678 command_runner.go:130] >       "username":  "",
	I1006 14:21:49.203935  649678 command_runner.go:130] >       "pinned":  false
	I1006 14:21:49.203938  649678 command_runner.go:130] >     },
	I1006 14:21:49.203941  649678 command_runner.go:130] >     {
	I1006 14:21:49.203947  649678 command_runner.go:130] >       "id":  "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f",
	I1006 14:21:49.203953  649678 command_runner.go:130] >       "repoTags":  [
	I1006 14:21:49.203958  649678 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.34.1"
	I1006 14:21:49.203961  649678 command_runner.go:130] >       ],
	I1006 14:21:49.203965  649678 command_runner.go:130] >       "repoDigests":  [
	I1006 14:21:49.203972  649678 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89",
	I1006 14:21:49.203981  649678 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"
	I1006 14:21:49.203984  649678 command_runner.go:130] >       ],
	I1006 14:21:49.203988  649678 command_runner.go:130] >       "size":  "76004181",
	I1006 14:21:49.203992  649678 command_runner.go:130] >       "uid":  {
	I1006 14:21:49.203998  649678 command_runner.go:130] >         "value":  "0"
	I1006 14:21:49.204001  649678 command_runner.go:130] >       },
	I1006 14:21:49.204005  649678 command_runner.go:130] >       "username":  "",
	I1006 14:21:49.204011  649678 command_runner.go:130] >       "pinned":  false
	I1006 14:21:49.204014  649678 command_runner.go:130] >     },
	I1006 14:21:49.204019  649678 command_runner.go:130] >     {
	I1006 14:21:49.204024  649678 command_runner.go:130] >       "id":  "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7",
	I1006 14:21:49.204028  649678 command_runner.go:130] >       "repoTags":  [
	I1006 14:21:49.204033  649678 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.34.1"
	I1006 14:21:49.204036  649678 command_runner.go:130] >       ],
	I1006 14:21:49.204042  649678 command_runner.go:130] >       "repoDigests":  [
	I1006 14:21:49.204055  649678 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a",
	I1006 14:21:49.204067  649678 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"
	I1006 14:21:49.204073  649678 command_runner.go:130] >       ],
	I1006 14:21:49.204078  649678 command_runner.go:130] >       "size":  "73138073",
	I1006 14:21:49.204081  649678 command_runner.go:130] >       "username":  "",
	I1006 14:21:49.204085  649678 command_runner.go:130] >       "pinned":  false
	I1006 14:21:49.204089  649678 command_runner.go:130] >     },
	I1006 14:21:49.204092  649678 command_runner.go:130] >     {
	I1006 14:21:49.204097  649678 command_runner.go:130] >       "id":  "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813",
	I1006 14:21:49.204104  649678 command_runner.go:130] >       "repoTags":  [
	I1006 14:21:49.204108  649678 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.34.1"
	I1006 14:21:49.204112  649678 command_runner.go:130] >       ],
	I1006 14:21:49.204116  649678 command_runner.go:130] >       "repoDigests":  [
	I1006 14:21:49.204123  649678 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31",
	I1006 14:21:49.204153  649678 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"
	I1006 14:21:49.204160  649678 command_runner.go:130] >       ],
	I1006 14:21:49.204164  649678 command_runner.go:130] >       "size":  "53844823",
	I1006 14:21:49.204167  649678 command_runner.go:130] >       "uid":  {
	I1006 14:21:49.204170  649678 command_runner.go:130] >         "value":  "0"
	I1006 14:21:49.204174  649678 command_runner.go:130] >       },
	I1006 14:21:49.204178  649678 command_runner.go:130] >       "username":  "",
	I1006 14:21:49.204183  649678 command_runner.go:130] >       "pinned":  false
	I1006 14:21:49.204188  649678 command_runner.go:130] >     },
	I1006 14:21:49.204191  649678 command_runner.go:130] >     {
	I1006 14:21:49.204197  649678 command_runner.go:130] >       "id":  "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f",
	I1006 14:21:49.204222  649678 command_runner.go:130] >       "repoTags":  [
	I1006 14:21:49.204230  649678 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1006 14:21:49.204237  649678 command_runner.go:130] >       ],
	I1006 14:21:49.204243  649678 command_runner.go:130] >       "repoDigests":  [
	I1006 14:21:49.204253  649678 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1006 14:21:49.204260  649678 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"
	I1006 14:21:49.204266  649678 command_runner.go:130] >       ],
	I1006 14:21:49.204269  649678 command_runner.go:130] >       "size":  "742092",
	I1006 14:21:49.204273  649678 command_runner.go:130] >       "uid":  {
	I1006 14:21:49.204277  649678 command_runner.go:130] >         "value":  "65535"
	I1006 14:21:49.204280  649678 command_runner.go:130] >       },
	I1006 14:21:49.204284  649678 command_runner.go:130] >       "username":  "",
	I1006 14:21:49.204288  649678 command_runner.go:130] >       "pinned":  true
	I1006 14:21:49.204291  649678 command_runner.go:130] >     }
	I1006 14:21:49.204294  649678 command_runner.go:130] >   ]
	I1006 14:21:49.204299  649678 command_runner.go:130] > }
	I1006 14:21:49.205550  649678 crio.go:514] all images are preloaded for cri-o runtime.
	I1006 14:21:49.205570  649678 crio.go:433] Images already preloaded, skipping extraction
	I1006 14:21:49.205618  649678 ssh_runner.go:195] Run: sudo crictl images --output json
	I1006 14:21:49.229611  649678 command_runner.go:130] > {
	I1006 14:21:49.229630  649678 command_runner.go:130] >   "images":  [
	I1006 14:21:49.229637  649678 command_runner.go:130] >     {
	I1006 14:21:49.229647  649678 command_runner.go:130] >       "id":  "409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c",
	I1006 14:21:49.229656  649678 command_runner.go:130] >       "repoTags":  [
	I1006 14:21:49.229664  649678 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1006 14:21:49.229669  649678 command_runner.go:130] >       ],
	I1006 14:21:49.229675  649678 command_runner.go:130] >       "repoDigests":  [
	I1006 14:21:49.229690  649678 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1006 14:21:49.229706  649678 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"
	I1006 14:21:49.229712  649678 command_runner.go:130] >       ],
	I1006 14:21:49.229738  649678 command_runner.go:130] >       "size":  "109379124",
	I1006 14:21:49.229748  649678 command_runner.go:130] >       "username":  "",
	I1006 14:21:49.229755  649678 command_runner.go:130] >       "pinned":  false
	I1006 14:21:49.229761  649678 command_runner.go:130] >     },
	I1006 14:21:49.229770  649678 command_runner.go:130] >     {
	I1006 14:21:49.229780  649678 command_runner.go:130] >       "id":  "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1006 14:21:49.229789  649678 command_runner.go:130] >       "repoTags":  [
	I1006 14:21:49.229799  649678 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1006 14:21:49.229807  649678 command_runner.go:130] >       ],
	I1006 14:21:49.229814  649678 command_runner.go:130] >       "repoDigests":  [
	I1006 14:21:49.229830  649678 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1006 14:21:49.229846  649678 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1006 14:21:49.229854  649678 command_runner.go:130] >       ],
	I1006 14:21:49.229863  649678 command_runner.go:130] >       "size":  "31470524",
	I1006 14:21:49.229872  649678 command_runner.go:130] >       "username":  "",
	I1006 14:21:49.229894  649678 command_runner.go:130] >       "pinned":  false
	I1006 14:21:49.229902  649678 command_runner.go:130] >     },
	I1006 14:21:49.229907  649678 command_runner.go:130] >     {
	I1006 14:21:49.229918  649678 command_runner.go:130] >       "id":  "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969",
	I1006 14:21:49.229927  649678 command_runner.go:130] >       "repoTags":  [
	I1006 14:21:49.229936  649678 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.12.1"
	I1006 14:21:49.229943  649678 command_runner.go:130] >       ],
	I1006 14:21:49.229951  649678 command_runner.go:130] >       "repoDigests":  [
	I1006 14:21:49.229965  649678 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998",
	I1006 14:21:49.229980  649678 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"
	I1006 14:21:49.229999  649678 command_runner.go:130] >       ],
	I1006 14:21:49.230007  649678 command_runner.go:130] >       "size":  "76103547",
	I1006 14:21:49.230016  649678 command_runner.go:130] >       "username":  "nonroot",
	I1006 14:21:49.230023  649678 command_runner.go:130] >       "pinned":  false
	I1006 14:21:49.230031  649678 command_runner.go:130] >     },
	I1006 14:21:49.230036  649678 command_runner.go:130] >     {
	I1006 14:21:49.230050  649678 command_runner.go:130] >       "id":  "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115",
	I1006 14:21:49.230059  649678 command_runner.go:130] >       "repoTags":  [
	I1006 14:21:49.230068  649678 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.4-0"
	I1006 14:21:49.230076  649678 command_runner.go:130] >       ],
	I1006 14:21:49.230083  649678 command_runner.go:130] >       "repoDigests":  [
	I1006 14:21:49.230097  649678 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f",
	I1006 14:21:49.230112  649678 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"
	I1006 14:21:49.230119  649678 command_runner.go:130] >       ],
	I1006 14:21:49.230127  649678 command_runner.go:130] >       "size":  "195976448",
	I1006 14:21:49.230135  649678 command_runner.go:130] >       "uid":  {
	I1006 14:21:49.230143  649678 command_runner.go:130] >         "value":  "0"
	I1006 14:21:49.230152  649678 command_runner.go:130] >       },
	I1006 14:21:49.230165  649678 command_runner.go:130] >       "username":  "",
	I1006 14:21:49.230175  649678 command_runner.go:130] >       "pinned":  false
	I1006 14:21:49.230181  649678 command_runner.go:130] >     },
	I1006 14:21:49.230189  649678 command_runner.go:130] >     {
	I1006 14:21:49.230220  649678 command_runner.go:130] >       "id":  "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97",
	I1006 14:21:49.230239  649678 command_runner.go:130] >       "repoTags":  [
	I1006 14:21:49.230249  649678 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.34.1"
	I1006 14:21:49.230257  649678 command_runner.go:130] >       ],
	I1006 14:21:49.230264  649678 command_runner.go:130] >       "repoDigests":  [
	I1006 14:21:49.230279  649678 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964",
	I1006 14:21:49.230306  649678 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"
	I1006 14:21:49.230314  649678 command_runner.go:130] >       ],
	I1006 14:21:49.230321  649678 command_runner.go:130] >       "size":  "89046001",
	I1006 14:21:49.230329  649678 command_runner.go:130] >       "uid":  {
	I1006 14:21:49.230336  649678 command_runner.go:130] >         "value":  "0"
	I1006 14:21:49.230345  649678 command_runner.go:130] >       },
	I1006 14:21:49.230352  649678 command_runner.go:130] >       "username":  "",
	I1006 14:21:49.230361  649678 command_runner.go:130] >       "pinned":  false
	I1006 14:21:49.230367  649678 command_runner.go:130] >     },
	I1006 14:21:49.230375  649678 command_runner.go:130] >     {
	I1006 14:21:49.230386  649678 command_runner.go:130] >       "id":  "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f",
	I1006 14:21:49.230395  649678 command_runner.go:130] >       "repoTags":  [
	I1006 14:21:49.230406  649678 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.34.1"
	I1006 14:21:49.230414  649678 command_runner.go:130] >       ],
	I1006 14:21:49.230421  649678 command_runner.go:130] >       "repoDigests":  [
	I1006 14:21:49.230436  649678 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89",
	I1006 14:21:49.230451  649678 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"
	I1006 14:21:49.230460  649678 command_runner.go:130] >       ],
	I1006 14:21:49.230467  649678 command_runner.go:130] >       "size":  "76004181",
	I1006 14:21:49.230484  649678 command_runner.go:130] >       "uid":  {
	I1006 14:21:49.230493  649678 command_runner.go:130] >         "value":  "0"
	I1006 14:21:49.230500  649678 command_runner.go:130] >       },
	I1006 14:21:49.230507  649678 command_runner.go:130] >       "username":  "",
	I1006 14:21:49.230516  649678 command_runner.go:130] >       "pinned":  false
	I1006 14:21:49.230523  649678 command_runner.go:130] >     },
	I1006 14:21:49.230529  649678 command_runner.go:130] >     {
	I1006 14:21:49.230542  649678 command_runner.go:130] >       "id":  "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7",
	I1006 14:21:49.230549  649678 command_runner.go:130] >       "repoTags":  [
	I1006 14:21:49.230568  649678 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.34.1"
	I1006 14:21:49.230576  649678 command_runner.go:130] >       ],
	I1006 14:21:49.230583  649678 command_runner.go:130] >       "repoDigests":  [
	I1006 14:21:49.230599  649678 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a",
	I1006 14:21:49.230614  649678 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"
	I1006 14:21:49.230621  649678 command_runner.go:130] >       ],
	I1006 14:21:49.230628  649678 command_runner.go:130] >       "size":  "73138073",
	I1006 14:21:49.230637  649678 command_runner.go:130] >       "username":  "",
	I1006 14:21:49.230645  649678 command_runner.go:130] >       "pinned":  false
	I1006 14:21:49.230653  649678 command_runner.go:130] >     },
	I1006 14:21:49.230658  649678 command_runner.go:130] >     {
	I1006 14:21:49.230665  649678 command_runner.go:130] >       "id":  "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813",
	I1006 14:21:49.230670  649678 command_runner.go:130] >       "repoTags":  [
	I1006 14:21:49.230679  649678 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.34.1"
	I1006 14:21:49.230687  649678 command_runner.go:130] >       ],
	I1006 14:21:49.230693  649678 command_runner.go:130] >       "repoDigests":  [
	I1006 14:21:49.230706  649678 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31",
	I1006 14:21:49.230734  649678 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"
	I1006 14:21:49.230745  649678 command_runner.go:130] >       ],
	I1006 14:21:49.230751  649678 command_runner.go:130] >       "size":  "53844823",
	I1006 14:21:49.230758  649678 command_runner.go:130] >       "uid":  {
	I1006 14:21:49.230767  649678 command_runner.go:130] >         "value":  "0"
	I1006 14:21:49.230773  649678 command_runner.go:130] >       },
	I1006 14:21:49.230783  649678 command_runner.go:130] >       "username":  "",
	I1006 14:21:49.230791  649678 command_runner.go:130] >       "pinned":  false
	I1006 14:21:49.230799  649678 command_runner.go:130] >     },
	I1006 14:21:49.230805  649678 command_runner.go:130] >     {
	I1006 14:21:49.230819  649678 command_runner.go:130] >       "id":  "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f",
	I1006 14:21:49.230828  649678 command_runner.go:130] >       "repoTags":  [
	I1006 14:21:49.230837  649678 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1006 14:21:49.230845  649678 command_runner.go:130] >       ],
	I1006 14:21:49.230852  649678 command_runner.go:130] >       "repoDigests":  [
	I1006 14:21:49.230865  649678 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1006 14:21:49.230878  649678 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"
	I1006 14:21:49.230887  649678 command_runner.go:130] >       ],
	I1006 14:21:49.230894  649678 command_runner.go:130] >       "size":  "742092",
	I1006 14:21:49.230902  649678 command_runner.go:130] >       "uid":  {
	I1006 14:21:49.230909  649678 command_runner.go:130] >         "value":  "65535"
	I1006 14:21:49.230918  649678 command_runner.go:130] >       },
	I1006 14:21:49.230924  649678 command_runner.go:130] >       "username":  "",
	I1006 14:21:49.230934  649678 command_runner.go:130] >       "pinned":  true
	I1006 14:21:49.230940  649678 command_runner.go:130] >     }
	I1006 14:21:49.230948  649678 command_runner.go:130] >   ]
	I1006 14:21:49.230953  649678 command_runner.go:130] > }
	I1006 14:21:49.231845  649678 crio.go:514] all images are preloaded for cri-o runtime.
	I1006 14:21:49.231866  649678 cache_images.go:85] Images are preloaded, skipping loading
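	
	The preload check above shells out to "sudo crictl images --output json" and compares the decoded image list against the expected set for the requested Kubernetes version. Below is a minimal, illustrative Go sketch of decoding that payload; the struct names are hypothetical (not minikube's actual types), but the field shapes mirror the crictl output captured in the log, where "size" and the uid "value" are encoded as strings:
	
	package main
	
	import (
		"encoding/json"
		"fmt"
	)
	
	// Field shapes mirror the crictl output captured in the log above.
	type imageList struct {
		Images []image `json:"images"`
	}
	
	type image struct {
		ID          string   `json:"id"`
		RepoTags    []string `json:"repoTags"`
		RepoDigests []string `json:"repoDigests"`
		Size        string   `json:"size"` // bytes, encoded as a string by crictl
		Username    string   `json:"username"`
		Pinned      bool     `json:"pinned"`
	}
	
	func main() {
		raw := []byte(`{"images": [{"id": "cd073f4c5f6a", "repoTags": ["registry.k8s.io/pause:3.10.1"], "size": "742092", "pinned": true}]}`)
		var list imageList
		if err := json.Unmarshal(raw, &list); err != nil {
			panic(err)
		}
		for _, img := range list.Images {
			fmt.Println(img.RepoTags, img.Size, img.Pinned)
		}
	}
	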
	I1006 14:21:49.231873  649678 kubeadm.go:934] updating node { 192.168.49.2 8441 v1.34.1 crio true true} ...
	I1006 14:21:49.232021  649678 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-135520 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:functional-135520 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
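	
	For reference on the kubelet ExecStart line above: the kubelet rejects node-allocatable enforcement when the QoS cgroup hierarchy is disabled, so "--cgroups-per-qos=false" and the empty "--enforce-node-allocatable=" must be set together (the equivalent KubeletConfiguration fields are cgroupsPerQOS: false and enforceNodeAllocatable: []), while "--hostname-override" and "--node-ip" pin the node's identity to the minikube container's name and address.
	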
	I1006 14:21:49.232106  649678 ssh_runner.go:195] Run: crio config
	I1006 14:21:49.273258  649678 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1006 14:21:49.273298  649678 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1006 14:21:49.273306  649678 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1006 14:21:49.273309  649678 command_runner.go:130] > #
	I1006 14:21:49.273321  649678 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1006 14:21:49.273332  649678 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1006 14:21:49.273343  649678 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1006 14:21:49.273357  649678 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1006 14:21:49.273367  649678 command_runner.go:130] > # reload'.
	I1006 14:21:49.273377  649678 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1006 14:21:49.273389  649678 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1006 14:21:49.273403  649678 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1006 14:21:49.273413  649678 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1006 14:21:49.273423  649678 command_runner.go:130] > [crio]
	I1006 14:21:49.273433  649678 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1006 14:21:49.273446  649678 command_runner.go:130] > # container images, in this directory.
	I1006 14:21:49.273471  649678 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1006 14:21:49.273486  649678 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1006 14:21:49.273494  649678 command_runner.go:130] > # runroot = "/tmp/storage-run-1000/containers"
	I1006 14:21:49.273512  649678 command_runner.go:130] > # Path to the "imagestore". If set, CRI-O stores its images in this directory rather than under the root directory.
	I1006 14:21:49.273525  649678 command_runner.go:130] > # imagestore = ""
	I1006 14:21:49.273535  649678 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1006 14:21:49.273548  649678 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1006 14:21:49.273561  649678 command_runner.go:130] > # storage_driver = "overlay"
	I1006 14:21:49.273574  649678 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1006 14:21:49.273591  649678 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1006 14:21:49.273599  649678 command_runner.go:130] > # storage_option = [
	I1006 14:21:49.273613  649678 command_runner.go:130] > # ]
	I1006 14:21:49.273623  649678 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1006 14:21:49.273635  649678 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1006 14:21:49.273642  649678 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1006 14:21:49.273652  649678 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1006 14:21:49.273664  649678 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1006 14:21:49.273678  649678 command_runner.go:130] > # always happen on a node reboot
	I1006 14:21:49.273690  649678 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1006 14:21:49.273712  649678 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1006 14:21:49.273725  649678 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1006 14:21:49.273743  649678 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1006 14:21:49.273751  649678 command_runner.go:130] > # version_file_persist = ""
	I1006 14:21:49.273764  649678 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1006 14:21:49.273781  649678 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1006 14:21:49.273792  649678 command_runner.go:130] > # internal_wipe = true
	I1006 14:21:49.273806  649678 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1006 14:21:49.273819  649678 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1006 14:21:49.273829  649678 command_runner.go:130] > # internal_repair = true
	I1006 14:21:49.273842  649678 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1006 14:21:49.273856  649678 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1006 14:21:49.273870  649678 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1006 14:21:49.273880  649678 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1006 14:21:49.273894  649678 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1006 14:21:49.273901  649678 command_runner.go:130] > [crio.api]
	I1006 14:21:49.273915  649678 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1006 14:21:49.273926  649678 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1006 14:21:49.273935  649678 command_runner.go:130] > # IP address on which the stream server will listen.
	I1006 14:21:49.273947  649678 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1006 14:21:49.273963  649678 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1006 14:21:49.273975  649678 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1006 14:21:49.273987  649678 command_runner.go:130] > # stream_port = "0"
	I1006 14:21:49.274002  649678 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1006 14:21:49.274013  649678 command_runner.go:130] > # stream_enable_tls = false
	I1006 14:21:49.274023  649678 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1006 14:21:49.274035  649678 command_runner.go:130] > # stream_idle_timeout = ""
	I1006 14:21:49.274045  649678 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1006 14:21:49.274059  649678 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes.
	I1006 14:21:49.274068  649678 command_runner.go:130] > # stream_tls_cert = ""
	I1006 14:21:49.274083  649678 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1006 14:21:49.274109  649678 command_runner.go:130] > # change and CRI-O will automatically pick up the changes.
	I1006 14:21:49.274132  649678 command_runner.go:130] > # stream_tls_key = ""
	I1006 14:21:49.274143  649678 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1006 14:21:49.274153  649678 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1006 14:21:49.274162  649678 command_runner.go:130] > # automatically pick up the changes.
	I1006 14:21:49.274173  649678 command_runner.go:130] > # stream_tls_ca = ""
	I1006 14:21:49.274218  649678 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1006 14:21:49.274233  649678 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1006 14:21:49.274245  649678 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1006 14:21:49.274257  649678 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1006 14:21:49.274268  649678 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1006 14:21:49.274281  649678 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1006 14:21:49.274293  649678 command_runner.go:130] > [crio.runtime]
	I1006 14:21:49.274303  649678 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1006 14:21:49.274315  649678 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1006 14:21:49.274325  649678 command_runner.go:130] > # "nofile=1024:2048"
	I1006 14:21:49.274336  649678 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1006 14:21:49.274347  649678 command_runner.go:130] > # default_ulimits = [
	I1006 14:21:49.274353  649678 command_runner.go:130] > # ]
	I1006 14:21:49.274363  649678 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1006 14:21:49.274374  649678 command_runner.go:130] > # no_pivot = false
	I1006 14:21:49.274384  649678 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1006 14:21:49.274399  649678 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1006 14:21:49.274410  649678 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1006 14:21:49.274425  649678 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1006 14:21:49.274437  649678 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1006 14:21:49.274453  649678 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1006 14:21:49.274464  649678 command_runner.go:130] > # conmon = ""
	I1006 14:21:49.274473  649678 command_runner.go:130] > # Cgroup setting for conmon
	I1006 14:21:49.274487  649678 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1006 14:21:49.274498  649678 command_runner.go:130] > conmon_cgroup = "pod"
	I1006 14:21:49.274508  649678 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1006 14:21:49.274520  649678 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1006 14:21:49.274533  649678 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1006 14:21:49.274545  649678 command_runner.go:130] > # conmon_env = [
	I1006 14:21:49.274559  649678 command_runner.go:130] > # ]
	I1006 14:21:49.274566  649678 command_runner.go:130] > # Additional environment variables to set for all the
	I1006 14:21:49.274574  649678 command_runner.go:130] > # containers. These are overridden if set in the
	I1006 14:21:49.274583  649678 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1006 14:21:49.274593  649678 command_runner.go:130] > # default_env = [
	I1006 14:21:49.274599  649678 command_runner.go:130] > # ]
	I1006 14:21:49.274610  649678 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1006 14:21:49.274625  649678 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I1006 14:21:49.274633  649678 command_runner.go:130] > # selinux = false
	I1006 14:21:49.274646  649678 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1006 14:21:49.274658  649678 command_runner.go:130] > # for the runtime. If not specified or set to "", then the internal default seccomp profile will be used.
	I1006 14:21:49.274677  649678 command_runner.go:130] > # This option supports live configuration reload.
	I1006 14:21:49.274687  649678 command_runner.go:130] > # seccomp_profile = ""
	I1006 14:21:49.274698  649678 command_runner.go:130] > # Enable a seccomp profile for privileged containers from the local path.
	I1006 14:21:49.274707  649678 command_runner.go:130] > # This option supports live configuration reload.
	I1006 14:21:49.274715  649678 command_runner.go:130] > # privileged_seccomp_profile = ""
	I1006 14:21:49.274733  649678 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1006 14:21:49.274744  649678 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1006 14:21:49.274754  649678 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1006 14:21:49.274768  649678 command_runner.go:130] > # the profile is set to "unconfined", then this is equivalent to disabling AppArmor.
	I1006 14:21:49.274776  649678 command_runner.go:130] > # This option supports live configuration reload.
	I1006 14:21:49.274784  649678 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1006 14:21:49.274794  649678 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1006 14:21:49.274802  649678 command_runner.go:130] > # the cgroup blockio controller.
	I1006 14:21:49.274809  649678 command_runner.go:130] > # blockio_config_file = ""
	I1006 14:21:49.274820  649678 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1006 14:21:49.274828  649678 command_runner.go:130] > # blockio parameters.
	I1006 14:21:49.274840  649678 command_runner.go:130] > # blockio_reload = false
	I1006 14:21:49.274849  649678 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1006 14:21:49.274856  649678 command_runner.go:130] > # irqbalance daemon.
	I1006 14:21:49.274870  649678 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1006 14:21:49.274886  649678 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1006 14:21:49.274901  649678 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1006 14:21:49.274915  649678 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1006 14:21:49.274927  649678 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1006 14:21:49.274933  649678 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1006 14:21:49.274941  649678 command_runner.go:130] > # This option supports live configuration reload.
	I1006 14:21:49.274945  649678 command_runner.go:130] > # rdt_config_file = ""
	I1006 14:21:49.274950  649678 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1006 14:21:49.274955  649678 command_runner.go:130] > # cgroup_manager = "systemd"
	I1006 14:21:49.274962  649678 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1006 14:21:49.274968  649678 command_runner.go:130] > # separate_pull_cgroup = ""
	I1006 14:21:49.274974  649678 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1006 14:21:49.274982  649678 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1006 14:21:49.274986  649678 command_runner.go:130] > # will be added.
	I1006 14:21:49.274991  649678 command_runner.go:130] > # default_capabilities = [
	I1006 14:21:49.274994  649678 command_runner.go:130] > # 	"CHOWN",
	I1006 14:21:49.274998  649678 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1006 14:21:49.275001  649678 command_runner.go:130] > # 	"FSETID",
	I1006 14:21:49.275004  649678 command_runner.go:130] > # 	"FOWNER",
	I1006 14:21:49.275008  649678 command_runner.go:130] > # 	"SETGID",
	I1006 14:21:49.275026  649678 command_runner.go:130] > # 	"SETUID",
	I1006 14:21:49.275033  649678 command_runner.go:130] > # 	"SETPCAP",
	I1006 14:21:49.275037  649678 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1006 14:21:49.275040  649678 command_runner.go:130] > # 	"KILL",
	I1006 14:21:49.275043  649678 command_runner.go:130] > # ]
	I1006 14:21:49.275051  649678 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1006 14:21:49.275059  649678 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1006 14:21:49.275064  649678 command_runner.go:130] > # add_inheritable_capabilities = false
	I1006 14:21:49.275071  649678 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1006 14:21:49.275077  649678 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1006 14:21:49.275083  649678 command_runner.go:130] > default_sysctls = [
	I1006 14:21:49.275087  649678 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1006 14:21:49.275090  649678 command_runner.go:130] > ]
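	
	The one sysctl set above, net.ipv4.ip_unprivileged_port_start=0, lowers the first unprivileged port to 0, so container processes can bind ports below 1024 (for example 80/443 for ingress) without CAP_NET_BIND_SERVICE. A small illustrative Go check of the effective value from inside a container, reading the standard procfs path for this sysctl:
	
	package main
	
	import (
		"fmt"
		"os"
	)
	
	func main() {
		// 0 means processes may bind ports below 1024 without CAP_NET_BIND_SERVICE.
		b, err := os.ReadFile("/proc/sys/net/ipv4/ip_unprivileged_port_start")
		if err != nil {
			panic(err)
		}
		fmt.Print(string(b))
	}
	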
	I1006 14:21:49.275096  649678 command_runner.go:130] > # List of devices on the host that a
	I1006 14:21:49.275104  649678 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1006 14:21:49.275109  649678 command_runner.go:130] > # allowed_devices = [
	I1006 14:21:49.275122  649678 command_runner.go:130] > # 	"/dev/fuse",
	I1006 14:21:49.275128  649678 command_runner.go:130] > # 	"/dev/net/tun",
	I1006 14:21:49.275132  649678 command_runner.go:130] > # ]
	I1006 14:21:49.275136  649678 command_runner.go:130] > # List of additional devices, specified as
	I1006 14:21:49.275146  649678 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1006 14:21:49.275151  649678 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1006 14:21:49.275156  649678 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1006 14:21:49.275162  649678 command_runner.go:130] > # additional_devices = [
	I1006 14:21:49.275166  649678 command_runner.go:130] > # ]
	I1006 14:21:49.275170  649678 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1006 14:21:49.275176  649678 command_runner.go:130] > # cdi_spec_dirs = [
	I1006 14:21:49.275180  649678 command_runner.go:130] > # 	"/etc/cdi",
	I1006 14:21:49.275184  649678 command_runner.go:130] > # 	"/var/run/cdi",
	I1006 14:21:49.275189  649678 command_runner.go:130] > # ]
	I1006 14:21:49.275195  649678 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1006 14:21:49.275216  649678 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1006 14:21:49.275225  649678 command_runner.go:130] > # Defaults to false.
	I1006 14:21:49.275239  649678 command_runner.go:130] > # device_ownership_from_security_context = false
	I1006 14:21:49.275249  649678 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1006 14:21:49.275255  649678 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1006 14:21:49.275262  649678 command_runner.go:130] > # hooks_dir = [
	I1006 14:21:49.275267  649678 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1006 14:21:49.275273  649678 command_runner.go:130] > # ]
	I1006 14:21:49.275278  649678 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1006 14:21:49.275284  649678 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1006 14:21:49.275292  649678 command_runner.go:130] > # its default mounts from the following two files:
	I1006 14:21:49.275295  649678 command_runner.go:130] > #
	I1006 14:21:49.275300  649678 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1006 14:21:49.275309  649678 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1006 14:21:49.275315  649678 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1006 14:21:49.275328  649678 command_runner.go:130] > #
	I1006 14:21:49.275338  649678 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1006 14:21:49.275345  649678 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1006 14:21:49.275353  649678 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1006 14:21:49.275358  649678 command_runner.go:130] > #      only add mounts it finds in this file.
	I1006 14:21:49.275364  649678 command_runner.go:130] > #
	I1006 14:21:49.275370  649678 command_runner.go:130] > # default_mounts_file = ""
	I1006 14:21:49.275378  649678 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1006 14:21:49.275385  649678 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1006 14:21:49.275391  649678 command_runner.go:130] > # pids_limit = -1
	I1006 14:21:49.275398  649678 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1006 14:21:49.275406  649678 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1006 14:21:49.275412  649678 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1006 14:21:49.275420  649678 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1006 14:21:49.275426  649678 command_runner.go:130] > # log_size_max = -1
	I1006 14:21:49.275433  649678 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1006 14:21:49.275439  649678 command_runner.go:130] > # log_to_journald = false
	I1006 14:21:49.275445  649678 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1006 14:21:49.275452  649678 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1006 14:21:49.275457  649678 command_runner.go:130] > # Path to directory for container attach sockets.
	I1006 14:21:49.275463  649678 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1006 14:21:49.275467  649678 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1006 14:21:49.275474  649678 command_runner.go:130] > # bind_mount_prefix = ""
	I1006 14:21:49.275479  649678 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1006 14:21:49.275485  649678 command_runner.go:130] > # read_only = false
	I1006 14:21:49.275491  649678 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1006 14:21:49.275497  649678 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1006 14:21:49.275504  649678 command_runner.go:130] > # live configuration reload.
	I1006 14:21:49.275508  649678 command_runner.go:130] > # log_level = "info"
	I1006 14:21:49.275513  649678 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1006 14:21:49.275521  649678 command_runner.go:130] > # This option supports live configuration reload.
	I1006 14:21:49.275525  649678 command_runner.go:130] > # log_filter = ""
	I1006 14:21:49.275530  649678 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1006 14:21:49.275542  649678 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1006 14:21:49.275549  649678 command_runner.go:130] > # separated by comma.
	I1006 14:21:49.275557  649678 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1006 14:21:49.275563  649678 command_runner.go:130] > # uid_mappings = ""
	I1006 14:21:49.275569  649678 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1006 14:21:49.275577  649678 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1006 14:21:49.275585  649678 command_runner.go:130] > # separated by comma.
	I1006 14:21:49.275594  649678 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1006 14:21:49.275598  649678 command_runner.go:130] > # gid_mappings = ""
	I1006 14:21:49.275606  649678 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1006 14:21:49.275614  649678 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1006 14:21:49.275621  649678 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1006 14:21:49.275630  649678 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1006 14:21:49.275634  649678 command_runner.go:130] > # minimum_mappable_uid = -1
	I1006 14:21:49.275640  649678 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1006 14:21:49.275648  649678 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1006 14:21:49.275654  649678 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1006 14:21:49.275664  649678 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1006 14:21:49.275668  649678 command_runner.go:130] > # minimum_mappable_gid = -1
	I1006 14:21:49.275676  649678 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1006 14:21:49.275683  649678 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1006 14:21:49.275690  649678 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1006 14:21:49.275694  649678 command_runner.go:130] > # ctr_stop_timeout = 30
	I1006 14:21:49.275700  649678 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1006 14:21:49.275706  649678 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1006 14:21:49.275711  649678 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1006 14:21:49.275718  649678 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1006 14:21:49.275722  649678 command_runner.go:130] > # drop_infra_ctr = true
	I1006 14:21:49.275731  649678 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1006 14:21:49.275736  649678 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1006 14:21:49.275746  649678 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1006 14:21:49.275752  649678 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1006 14:21:49.275759  649678 command_runner.go:130] > # shared_cpuset determines the CPU set which is allowed to be shared between guaranteed containers,
	I1006 14:21:49.275772  649678 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1006 14:21:49.275778  649678 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1006 14:21:49.275786  649678 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1006 14:21:49.275790  649678 command_runner.go:130] > # shared_cpuset = ""
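	
	For both infra_ctr_cpuset and shared_cpuset above, the Linux CPU list format is the usual comma-separated set of ranges; for example, "0-3,8" selects CPUs 0 through 3 plus CPU 8.
	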
	I1006 14:21:49.275800  649678 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1006 14:21:49.275805  649678 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1006 14:21:49.275811  649678 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1006 14:21:49.275817  649678 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1006 14:21:49.275824  649678 command_runner.go:130] > # pinns_path = ""
	I1006 14:21:49.275829  649678 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1006 14:21:49.275838  649678 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1006 14:21:49.275842  649678 command_runner.go:130] > # enable_criu_support = true
	I1006 14:21:49.275849  649678 command_runner.go:130] > # Enable/disable the generation of the container,
	I1006 14:21:49.275855  649678 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1006 14:21:49.275859  649678 command_runner.go:130] > # enable_pod_events = false
	I1006 14:21:49.275865  649678 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1006 14:21:49.275872  649678 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1006 14:21:49.275876  649678 command_runner.go:130] > # default_runtime = "crun"
	I1006 14:21:49.275880  649678 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1006 14:21:49.275887  649678 command_runner.go:130] > # will cause container creation to fail (as opposed to the current behavior, where the path is created as a directory).
	I1006 14:21:49.275898  649678 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1006 14:21:49.275906  649678 command_runner.go:130] > # creation as a file is not desired either.
	I1006 14:21:49.275914  649678 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1006 14:21:49.275921  649678 command_runner.go:130] > # the hostname is being managed dynamically.
	I1006 14:21:49.275925  649678 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1006 14:21:49.275930  649678 command_runner.go:130] > # ]
	I1006 14:21:49.275936  649678 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1006 14:21:49.275945  649678 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1006 14:21:49.275951  649678 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1006 14:21:49.275955  649678 command_runner.go:130] > # Each entry in the table should follow the format:
	I1006 14:21:49.275961  649678 command_runner.go:130] > #
	I1006 14:21:49.275965  649678 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1006 14:21:49.275969  649678 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1006 14:21:49.275980  649678 command_runner.go:130] > # runtime_type = "oci"
	I1006 14:21:49.275988  649678 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1006 14:21:49.275993  649678 command_runner.go:130] > # inherit_default_runtime = false
	I1006 14:21:49.275997  649678 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1006 14:21:49.276002  649678 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1006 14:21:49.276009  649678 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1006 14:21:49.276013  649678 command_runner.go:130] > # monitor_env = []
	I1006 14:21:49.276020  649678 command_runner.go:130] > # privileged_without_host_devices = false
	I1006 14:21:49.276024  649678 command_runner.go:130] > # allowed_annotations = []
	I1006 14:21:49.276029  649678 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1006 14:21:49.276035  649678 command_runner.go:130] > # no_sync_log = false
	I1006 14:21:49.276039  649678 command_runner.go:130] > # default_annotations = {}
	I1006 14:21:49.276044  649678 command_runner.go:130] > # stream_websockets = false
	I1006 14:21:49.276052  649678 command_runner.go:130] > # seccomp_profile = ""
	I1006 14:21:49.276074  649678 command_runner.go:130] > # Where:
	I1006 14:21:49.276087  649678 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1006 14:21:49.276100  649678 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1006 14:21:49.276111  649678 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1006 14:21:49.276124  649678 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1006 14:21:49.276128  649678 command_runner.go:130] > #   in $PATH.
	I1006 14:21:49.276137  649678 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1006 14:21:49.276141  649678 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1006 14:21:49.276149  649678 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1006 14:21:49.276153  649678 command_runner.go:130] > #   state.
	I1006 14:21:49.276159  649678 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1006 14:21:49.276165  649678 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1006 14:21:49.276173  649678 command_runner.go:130] > # - inherit_default_runtime (optional, bool): when true the runtime_path,
	I1006 14:21:49.276179  649678 command_runner.go:130] > #   runtime_type, runtime_root and runtime_config_path will be replaced by
	I1006 14:21:49.276186  649678 command_runner.go:130] > #   the values from the default runtime on load time.
	I1006 14:21:49.276193  649678 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1006 14:21:49.276200  649678 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1006 14:21:49.276242  649678 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1006 14:21:49.276258  649678 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1006 14:21:49.276269  649678 command_runner.go:130] > #   The currently recognized values are:
	I1006 14:21:49.276276  649678 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1006 14:21:49.276286  649678 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1006 14:21:49.276294  649678 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1006 14:21:49.276300  649678 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1006 14:21:49.276308  649678 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1006 14:21:49.276314  649678 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1006 14:21:49.276323  649678 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1006 14:21:49.276330  649678 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1006 14:21:49.276338  649678 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1006 14:21:49.276344  649678 command_runner.go:130] > #   "seccomp-profile.kubernetes.cri-o.io" for setting the seccomp profile for:
	I1006 14:21:49.276353  649678 command_runner.go:130] > #     - a specific container by using: "seccomp-profile.kubernetes.cri-o.io/<CONTAINER_NAME>"
	I1006 14:21:49.276359  649678 command_runner.go:130] > #     - a whole pod by using: "seccomp-profile.kubernetes.cri-o.io/POD"
	I1006 14:21:49.276370  649678 command_runner.go:130] > #     Note that the annotation works on containers as well as on images.
	I1006 14:21:49.276380  649678 command_runner.go:130] > #     For images, the plain annotation "seccomp-profile.kubernetes.cri-o.io"
	I1006 14:21:49.276386  649678 command_runner.go:130] > #     can be used without the required "/POD" suffix or a container name.
	I1006 14:21:49.276396  649678 command_runner.go:130] > #   "io.kubernetes.cri-o.DisableFIPS" for disabling FIPS mode in a Kubernetes pod within a FIPS-enabled cluster.
	I1006 14:21:49.276402  649678 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1006 14:21:49.276409  649678 command_runner.go:130] > #   deprecated option "conmon".
	I1006 14:21:49.276416  649678 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1006 14:21:49.276423  649678 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1006 14:21:49.276429  649678 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1006 14:21:49.276437  649678 command_runner.go:130] > #   should be moved to the container's cgroup
	I1006 14:21:49.276444  649678 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1006 14:21:49.276451  649678 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1006 14:21:49.276459  649678 command_runner.go:130] > #   When using the pod runtime and conmon-rs, the monitor_env can be used to further configure
	I1006 14:21:49.276465  649678 command_runner.go:130] > #   conmon-rs by using:
	I1006 14:21:49.276472  649678 command_runner.go:130] > #     - LOG_DRIVER=[none,systemd,stdout] - Enable logging to the configured target, defaults to none.
	I1006 14:21:49.276481  649678 command_runner.go:130] > #     - HEAPTRACK_OUTPUT_PATH=/path/to/dir - Enable heaptrack profiling and save the files to the set directory.
	I1006 14:21:49.276488  649678 command_runner.go:130] > #     - HEAPTRACK_BINARY_PATH=/path/to/heaptrack - Enable heaptrack profiling and use set heaptrack binary.
	I1006 14:21:49.276494  649678 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1006 14:21:49.276502  649678 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1006 14:21:49.276509  649678 command_runner.go:130] > # - container_min_memory (optional, string): The minimum memory that must be set for a container.
	I1006 14:21:49.276519  649678 command_runner.go:130] > #   This value can be used to override the currently set global value for a specific runtime. If not set,
	I1006 14:21:49.276524  649678 command_runner.go:130] > #   a global default value of "12 MiB" will be used.
	I1006 14:21:49.276534  649678 command_runner.go:130] > # - no_sync_log (optional, bool): If set to true, the runtime will not sync the log file on rotate or container exit.
	I1006 14:21:49.276543  649678 command_runner.go:130] > #   This option is only valid for the 'oci' runtime type. Setting this option to true can cause data loss, e.g.
	I1006 14:21:49.276551  649678 command_runner.go:130] > #   when a machine crash happens.
	I1006 14:21:49.276558  649678 command_runner.go:130] > # - default_annotations (optional, map): Default annotations if not overridden by the pod spec.
	I1006 14:21:49.276568  649678 command_runner.go:130] > # - stream_websockets (optional, bool): Enable the WebSocket protocol for container exec, attach and port forward.
	I1006 14:21:49.276576  649678 command_runner.go:130] > # - seccomp_profile (optional, string): The absolute path of the seccomp.json profile which is used as the default
	I1006 14:21:49.276583  649678 command_runner.go:130] > #   seccomp profile for the runtime.
	I1006 14:21:49.276589  649678 command_runner.go:130] > #   If not specified or set to "", the runtime seccomp_profile will be used.
	I1006 14:21:49.276598  649678 command_runner.go:130] > #   If that is also not specified or set to "", the internal default seccomp profile will be applied.
	I1006 14:21:49.276601  649678 command_runner.go:130] > #
	I1006 14:21:49.276605  649678 command_runner.go:130] > # Using the seccomp notifier feature:
	I1006 14:21:49.276610  649678 command_runner.go:130] > #
	I1006 14:21:49.276617  649678 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1006 14:21:49.276626  649678 command_runner.go:130] > # blocked syscalls (permission denied errors) have a negative impact on the workload.
	I1006 14:21:49.276629  649678 command_runner.go:130] > #
	I1006 14:21:49.276635  649678 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1006 14:21:49.276643  649678 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1006 14:21:49.276646  649678 command_runner.go:130] > #
	I1006 14:21:49.276655  649678 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1006 14:21:49.276664  649678 command_runner.go:130] > # feature.
	I1006 14:21:49.276670  649678 command_runner.go:130] > #
	I1006 14:21:49.276684  649678 command_runner.go:130] > # If everything is set up, CRI-O will modify chosen seccomp profiles for
	I1006 14:21:49.276693  649678 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1006 14:21:49.276700  649678 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1006 14:21:49.276708  649678 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1006 14:21:49.276714  649678 command_runner.go:130] > # seconds if "io.kubernetes.cri-o.seccompNotifierAction" is set to "stop".
	I1006 14:21:49.276720  649678 command_runner.go:130] > #
	I1006 14:21:49.276726  649678 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1006 14:21:49.276734  649678 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1006 14:21:49.276737  649678 command_runner.go:130] > #
	I1006 14:21:49.276745  649678 command_runner.go:130] > # This also means that the Pod's "restartPolicy" has to be set to "Never",
	I1006 14:21:49.276765  649678 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1006 14:21:49.276775  649678 command_runner.go:130] > #
	I1006 14:21:49.276785  649678 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1006 14:21:49.276795  649678 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1006 14:21:49.276798  649678 command_runner.go:130] > # limitation.
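To use the notifier end to end, the annotation has to be allowed on the runtime side and set on the pod side. A minimal sketch under those assumptions (the drop-in path, pod name and image are illustrative, not taken from this run):

	# /etc/crio/crio.conf.d/99-seccomp-notifier.conf (hypothetical drop-in)
	[crio.runtime.runtimes.runc]
	allowed_annotations = [
	    "io.kubernetes.cri-o.seccompNotifierAction",
	]

	# Pod side: opt in and keep the kubelet from restarting the container.
	apiVersion: v1
	kind: Pod
	metadata:
	  name: seccomp-debug                # hypothetical name
	  annotations:
	    io.kubernetes.cri-o.seccompNotifierAction: "stop"
	spec:
	  restartPolicy: Never               # required, per the note above
	  containers:
	  - name: app
	    image: registry.k8s.io/pause:3.10.1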
	I1006 14:21:49.276802  649678 command_runner.go:130] > [crio.runtime.runtimes.crun]
	I1006 14:21:49.276807  649678 command_runner.go:130] > runtime_path = "/usr/libexec/crio/crun"
	I1006 14:21:49.276815  649678 command_runner.go:130] > runtime_type = ""
	I1006 14:21:49.276822  649678 command_runner.go:130] > runtime_root = "/run/crun"
	I1006 14:21:49.276833  649678 command_runner.go:130] > inherit_default_runtime = false
	I1006 14:21:49.276841  649678 command_runner.go:130] > runtime_config_path = ""
	I1006 14:21:49.276851  649678 command_runner.go:130] > container_min_memory = ""
	I1006 14:21:49.276860  649678 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1006 14:21:49.276871  649678 command_runner.go:130] > monitor_cgroup = "pod"
	I1006 14:21:49.276877  649678 command_runner.go:130] > monitor_exec_cgroup = ""
	I1006 14:21:49.276883  649678 command_runner.go:130] > allowed_annotations = [
	I1006 14:21:49.276890  649678 command_runner.go:130] > 	"io.containers.trace-syscall",
	I1006 14:21:49.276896  649678 command_runner.go:130] > ]
	I1006 14:21:49.276902  649678 command_runner.go:130] > privileged_without_host_devices = false
	I1006 14:21:49.276909  649678 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1006 14:21:49.276916  649678 command_runner.go:130] > runtime_path = "/usr/libexec/crio/runc"
	I1006 14:21:49.276922  649678 command_runner.go:130] > runtime_type = ""
	I1006 14:21:49.276929  649678 command_runner.go:130] > runtime_root = "/run/runc"
	I1006 14:21:49.276936  649678 command_runner.go:130] > inherit_default_runtime = false
	I1006 14:21:49.276946  649678 command_runner.go:130] > runtime_config_path = ""
	I1006 14:21:49.276954  649678 command_runner.go:130] > container_min_memory = ""
	I1006 14:21:49.276967  649678 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1006 14:21:49.276978  649678 command_runner.go:130] > monitor_cgroup = "pod"
	I1006 14:21:49.276984  649678 command_runner.go:130] > monitor_exec_cgroup = ""
	I1006 14:21:49.276991  649678 command_runner.go:130] > privileged_without_host_devices = false
	I1006 14:21:49.276998  649678 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1006 14:21:49.277005  649678 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1006 14:21:49.277012  649678 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1006 14:21:49.277036  649678 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I1006 14:21:49.277057  649678 command_runner.go:130] > # The currently supported resources are "cpuperiod", "cpuquota", "cpushares", "cpulimit" and "cpuset". The values for "cpuperiod" and "cpuquota" are denoted in microseconds.
	I1006 14:21:49.277077  649678 command_runner.go:130] > # The value for "cpulimit" is denoted in millicores; this value is used to calculate the "cpuquota" with the supplied "cpuperiod" or the default "cpuperiod".
	I1006 14:21:49.277093  649678 command_runner.go:130] > # Note that the "cpulimit" field overrides the "cpuquota" value supplied in this configuration.
	I1006 14:21:49.277104  649678 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1006 14:21:49.277125  649678 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1006 14:21:49.277141  649678 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1006 14:21:49.277151  649678 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1006 14:21:49.277167  649678 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1006 14:21:49.277177  649678 command_runner.go:130] > # Example:
	I1006 14:21:49.277189  649678 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1006 14:21:49.277201  649678 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1006 14:21:49.277225  649678 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1006 14:21:49.277238  649678 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1006 14:21:49.277249  649678 command_runner.go:130] > # cpuset = "0-1"
	I1006 14:21:49.277260  649678 command_runner.go:130] > # cpushares = "5"
	I1006 14:21:49.277270  649678 command_runner.go:130] > # cpuquota = "1000"
	I1006 14:21:49.277281  649678 command_runner.go:130] > # cpuperiod = "100000"
	I1006 14:21:49.277292  649678 command_runner.go:130] > # cpulimit = "35"
	I1006 14:21:49.277300  649678 command_runner.go:130] > # Where:
	I1006 14:21:49.277307  649678 command_runner.go:130] > # The workload name is workload-type.
	I1006 14:21:49.277323  649678 command_runner.go:130] > # To opt in, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1006 14:21:49.277336  649678 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1006 14:21:49.277349  649678 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1006 14:21:49.277366  649678 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1006 14:21:49.277381  649678 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
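The two annotation forms quoted above differ slightly in the upstream docs; following the $annotation_prefix.$resource/$ctrName form, a pod opting into the example workload might carry annotations like this sketch (container name and value are illustrative):

	metadata:
	  annotations:
	    io.crio/workload: ""                          # activation: key only, value ignored
	    io.crio.workload-type.cpushares/app: "512"    # cpushares override for container "app"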
	I1006 14:21:49.277393  649678 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1006 14:21:49.277406  649678 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1006 14:21:49.277416  649678 command_runner.go:130] > # Default value is set to true
	I1006 14:21:49.277427  649678 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1006 14:21:49.277441  649678 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1006 14:21:49.277453  649678 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1006 14:21:49.277465  649678 command_runner.go:130] > # Default value is set to 'false'
	I1006 14:21:49.277479  649678 command_runner.go:130] > # disable_hostport_mapping = false
	I1006 14:21:49.277492  649678 command_runner.go:130] > # timezone: Sets the timezone for a container in CRI-O.
	I1006 14:21:49.277513  649678 command_runner.go:130] > # If an empty string is provided, CRI-O retains its default behavior. Use 'Local' to match the timezone of the host machine.
	I1006 14:21:49.277521  649678 command_runner.go:130] > # timezone = ""
	I1006 14:21:49.277531  649678 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1006 14:21:49.277536  649678 command_runner.go:130] > #
	I1006 14:21:49.277547  649678 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1006 14:21:49.277557  649678 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf.
	I1006 14:21:49.277565  649678 command_runner.go:130] > [crio.image]
	I1006 14:21:49.277578  649678 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1006 14:21:49.277589  649678 command_runner.go:130] > # default_transport = "docker://"
	I1006 14:21:49.277603  649678 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1006 14:21:49.277617  649678 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1006 14:21:49.277627  649678 command_runner.go:130] > # global_auth_file = ""
	I1006 14:21:49.277652  649678 command_runner.go:130] > # The image used to instantiate infra containers.
	I1006 14:21:49.277665  649678 command_runner.go:130] > # This option supports live configuration reload.
	I1006 14:21:49.277675  649678 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.10.1"
	I1006 14:21:49.277690  649678 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1006 14:21:49.277704  649678 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1006 14:21:49.277715  649678 command_runner.go:130] > # This option supports live configuration reload.
	I1006 14:21:49.277730  649678 command_runner.go:130] > # pause_image_auth_file = ""
	I1006 14:21:49.277741  649678 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1006 14:21:49.277755  649678 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I1006 14:21:49.277770  649678 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I1006 14:21:49.277785  649678 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1006 14:21:49.277796  649678 command_runner.go:130] > # pause_command = "/pause"
	I1006 14:21:49.277811  649678 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1006 14:21:49.277824  649678 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1006 14:21:49.277838  649678 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1006 14:21:49.277851  649678 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1006 14:21:49.277864  649678 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1006 14:21:49.277879  649678 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1006 14:21:49.277889  649678 command_runner.go:130] > # pinned_images = [
	I1006 14:21:49.277904  649678 command_runner.go:130] > # ]
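A populated pinned_images list mixing the three match styles described above could look like this sketch (entries are illustrative):

	pinned_images = [
	    "registry.k8s.io/pause:3.10.1",   # exact: must match the entire name
	    "quay.io/crio/*",                 # glob: wildcard at the end
	    "*critical*",                     # keyword: wildcards on both ends
	]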
	I1006 14:21:49.277918  649678 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1006 14:21:49.277929  649678 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1006 14:21:49.277943  649678 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1006 14:21:49.277957  649678 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1006 14:21:49.277969  649678 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1006 14:21:49.277982  649678 command_runner.go:130] > signature_policy = "/etc/crio/policy.json"
	I1006 14:21:49.277994  649678 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1006 14:21:49.278013  649678 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1006 14:21:49.278025  649678 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1006 14:21:49.278042  649678 command_runner.go:130] > # or the concatenated path is non-existent, then the signature_policy or system
	I1006 14:21:49.278056  649678 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1006 14:21:49.278069  649678 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1006 14:21:49.278083  649678 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1006 14:21:49.278099  649678 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1006 14:21:49.278109  649678 command_runner.go:130] > # changing them here.
	I1006 14:21:49.278127  649678 command_runner.go:130] > # This option is deprecated. Use registries.conf file instead.
	I1006 14:21:49.278138  649678 command_runner.go:130] > # insecure_registries = [
	I1006 14:21:49.278148  649678 command_runner.go:130] > # ]
	I1006 14:21:49.278163  649678 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1006 14:21:49.278181  649678 command_runner.go:130] > # ignore; the last of these ignores volumes entirely.
	I1006 14:21:49.278192  649678 command_runner.go:130] > # image_volumes = "mkdir"
	I1006 14:21:49.278214  649678 command_runner.go:130] > # Temporary directory to use for storing big files
	I1006 14:21:49.278227  649678 command_runner.go:130] > # big_files_temporary_dir = ""
	I1006 14:21:49.278237  649678 command_runner.go:130] > # If true, CRI-O will automatically reload the mirror registry when
	I1006 14:21:49.278253  649678 command_runner.go:130] > # there is an update to the 'registries.conf.d' directory. Default value is set to 'false'.
	I1006 14:21:49.278265  649678 command_runner.go:130] > # auto_reload_registries = false
	I1006 14:21:49.278278  649678 command_runner.go:130] > # The timeout for an image pull to make progress until the pull operation
	I1006 14:21:49.278294  649678 command_runner.go:130] > # gets canceled. This value will also be used to calculate the pull progress interval, as pull_progress_timeout / 10.
	I1006 14:21:49.278307  649678 command_runner.go:130] > # Can be set to 0 to disable the timeout as well as the progress output.
	I1006 14:21:49.278317  649678 command_runner.go:130] > # pull_progress_timeout = "0s"
	I1006 14:21:49.278329  649678 command_runner.go:130] > # The mode of short name resolution.
	I1006 14:21:49.278343  649678 command_runner.go:130] > # The valid values are "enforcing" and "disabled", and the default is "enforcing".
	I1006 14:21:49.278364  649678 command_runner.go:130] > # If "enforcing", an image pull will fail if a short name is used and the results are ambiguous.
	I1006 14:21:49.278377  649678 command_runner.go:130] > # If "disabled", the first result will be chosen.
	I1006 14:21:49.278389  649678 command_runner.go:130] > # short_name_mode = "enforcing"
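Under "enforcing", an otherwise-ambiguous short name can be resolved ahead of time with a short-name alias in containers-registries.conf(5); a sketch (the drop-in path and alias are illustrative):

	# /etc/containers/registries.conf.d/00-aliases.conf
	[aliases]
	"busybox" = "docker.io/library/busybox"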
	I1006 14:21:49.278403  649678 command_runner.go:130] > # OCIArtifactMountSupport is whether CRI-O should support OCI artifacts.
	I1006 14:21:49.278414  649678 command_runner.go:130] > # If set to false, mounting OCI Artifacts will result in an error.
	I1006 14:21:49.278425  649678 command_runner.go:130] > # oci_artifact_mount_support = true
	I1006 14:21:49.278440  649678 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1006 14:21:49.278450  649678 command_runner.go:130] > # CNI plugins.
	I1006 14:21:49.278460  649678 command_runner.go:130] > [crio.network]
	I1006 14:21:49.278474  649678 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1006 14:21:49.278486  649678 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1006 14:21:49.278497  649678 command_runner.go:130] > # cni_default_network = ""
	I1006 14:21:49.278508  649678 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1006 14:21:49.278519  649678 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1006 14:21:49.278532  649678 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1006 14:21:49.278543  649678 command_runner.go:130] > # plugin_dirs = [
	I1006 14:21:49.278554  649678 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1006 14:21:49.278563  649678 command_runner.go:130] > # ]
	I1006 14:21:49.278574  649678 command_runner.go:130] > # List of included pod metrics.
	I1006 14:21:49.278586  649678 command_runner.go:130] > # included_pod_metrics = [
	I1006 14:21:49.278594  649678 command_runner.go:130] > # ]
	I1006 14:21:49.278605  649678 command_runner.go:130] > # A necessary configuration for Prometheus-based metrics retrieval
	I1006 14:21:49.278615  649678 command_runner.go:130] > [crio.metrics]
	I1006 14:21:49.278627  649678 command_runner.go:130] > # Globally enable or disable metrics support.
	I1006 14:21:49.278639  649678 command_runner.go:130] > # enable_metrics = false
	I1006 14:21:49.278651  649678 command_runner.go:130] > # Specify enabled metrics collectors.
	I1006 14:21:49.278662  649678 command_runner.go:130] > # Per default all metrics are enabled.
	I1006 14:21:49.278676  649678 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I1006 14:21:49.278689  649678 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1006 14:21:49.278700  649678 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1006 14:21:49.278712  649678 command_runner.go:130] > # metrics_collectors = [
	I1006 14:21:49.278718  649678 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1006 14:21:49.278727  649678 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1006 14:21:49.278740  649678 command_runner.go:130] > # 	"containers_oom_total",
	I1006 14:21:49.278747  649678 command_runner.go:130] > # 	"processes_defunct",
	I1006 14:21:49.278754  649678 command_runner.go:130] > # 	"operations_total",
	I1006 14:21:49.278761  649678 command_runner.go:130] > # 	"operations_latency_seconds",
	I1006 14:21:49.278769  649678 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1006 14:21:49.278776  649678 command_runner.go:130] > # 	"operations_errors_total",
	I1006 14:21:49.278786  649678 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1006 14:21:49.278798  649678 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1006 14:21:49.278810  649678 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1006 14:21:49.278822  649678 command_runner.go:130] > # 	"image_pulls_success_total",
	I1006 14:21:49.278833  649678 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1006 14:21:49.278844  649678 command_runner.go:130] > # 	"containers_oom_count_total",
	I1006 14:21:49.278856  649678 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1006 14:21:49.278867  649678 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1006 14:21:49.278878  649678 command_runner.go:130] > # 	"containers_stopped_monitor_count",
	I1006 14:21:49.278886  649678 command_runner.go:130] > # ]
	I1006 14:21:49.278896  649678 command_runner.go:130] > # The IP address or hostname on which the metrics server will listen.
	I1006 14:21:49.278907  649678 command_runner.go:130] > # metrics_host = "127.0.0.1"
	I1006 14:21:49.278916  649678 command_runner.go:130] > # The port on which the metrics server will listen.
	I1006 14:21:49.278927  649678 command_runner.go:130] > # metrics_port = 9090
	I1006 14:21:49.278939  649678 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1006 14:21:49.278950  649678 command_runner.go:130] > # metrics_socket = ""
	I1006 14:21:49.278962  649678 command_runner.go:130] > # The certificate for the secure metrics server.
	I1006 14:21:49.278975  649678 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1006 14:21:49.278986  649678 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1006 14:21:49.278998  649678 command_runner.go:130] > # certificate on any modification event.
	I1006 14:21:49.279009  649678 command_runner.go:130] > # metrics_cert = ""
	I1006 14:21:49.279018  649678 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1006 14:21:49.279031  649678 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1006 14:21:49.279042  649678 command_runner.go:130] > # metrics_key = ""
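If enable_metrics were turned on with the defaults above left in place, the endpoint would be plain HTTP and could be spot-checked from the host; a sketch:

	curl -s http://127.0.0.1:9090/metrics | grep '^crio_'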
	I1006 14:21:49.279054  649678 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1006 14:21:49.279065  649678 command_runner.go:130] > [crio.tracing]
	I1006 14:21:49.279078  649678 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1006 14:21:49.279088  649678 command_runner.go:130] > # enable_tracing = false
	I1006 14:21:49.279100  649678 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I1006 14:21:49.279118  649678 command_runner.go:130] > # tracing_endpoint = "127.0.0.1:4317"
	I1006 14:21:49.279133  649678 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1006 14:21:49.279145  649678 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1006 14:21:49.279155  649678 command_runner.go:130] > # CRI-O NRI configuration.
	I1006 14:21:49.279165  649678 command_runner.go:130] > [crio.nri]
	I1006 14:21:49.279176  649678 command_runner.go:130] > # Globally enable or disable NRI.
	I1006 14:21:49.279185  649678 command_runner.go:130] > # enable_nri = true
	I1006 14:21:49.279195  649678 command_runner.go:130] > # NRI socket to listen on.
	I1006 14:21:49.279220  649678 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1006 14:21:49.279232  649678 command_runner.go:130] > # NRI plugin directory to use.
	I1006 14:21:49.279239  649678 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1006 14:21:49.279251  649678 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1006 14:21:49.279263  649678 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1006 14:21:49.279276  649678 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1006 14:21:49.279348  649678 command_runner.go:130] > # nri_disable_connections = false
	I1006 14:21:49.279363  649678 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1006 14:21:49.279371  649678 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1006 14:21:49.279381  649678 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1006 14:21:49.279393  649678 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1006 14:21:49.279404  649678 command_runner.go:130] > # NRI default validator configuration.
	I1006 14:21:49.279420  649678 command_runner.go:130] > # If enabled, the built-in default validator can be used to reject a container if some
	I1006 14:21:49.279434  649678 command_runner.go:130] > # NRI plugin requested a restricted adjustment. Currently the following adjustments
	I1006 14:21:49.279445  649678 command_runner.go:130] > # can be restricted/rejected:
	I1006 14:21:49.279455  649678 command_runner.go:130] > # - OCI hook injection
	I1006 14:21:49.279467  649678 command_runner.go:130] > # - adjustment of runtime default seccomp profile
	I1006 14:21:49.279479  649678 command_runner.go:130] > # - adjustment of unconfined seccomp profile
	I1006 14:21:49.279488  649678 command_runner.go:130] > # - adjustment of a custom seccomp profile
	I1006 14:21:49.279499  649678 command_runner.go:130] > # - adjustment of linux namespaces
	I1006 14:21:49.279513  649678 command_runner.go:130] > # Additionally, the default validator can be used to reject container creation if any
	I1006 14:21:49.279528  649678 command_runner.go:130] > # of a required set of plugins has not processed a container creation request, unless
	I1006 14:21:49.279541  649678 command_runner.go:130] > # the container has been annotated to tolerate a missing plugin.
	I1006 14:21:49.279550  649678 command_runner.go:130] > #
	I1006 14:21:49.279561  649678 command_runner.go:130] > # [crio.nri.default_validator]
	I1006 14:21:49.279574  649678 command_runner.go:130] > # nri_enable_default_validator = false
	I1006 14:21:49.279587  649678 command_runner.go:130] > # nri_validator_reject_oci_hook_adjustment = false
	I1006 14:21:49.279600  649678 command_runner.go:130] > # nri_validator_reject_runtime_default_seccomp_adjustment = false
	I1006 14:21:49.279613  649678 command_runner.go:130] > # nri_validator_reject_unconfined_seccomp_adjustment = false
	I1006 14:21:49.279626  649678 command_runner.go:130] > # nri_validator_reject_custom_seccomp_adjustment = false
	I1006 14:21:49.279636  649678 command_runner.go:130] > # nri_validator_reject_namespace_adjustment = false
	I1006 14:21:49.279646  649678 command_runner.go:130] > # nri_validator_required_plugins = [
	I1006 14:21:49.279656  649678 command_runner.go:130] > # ]
	I1006 14:21:49.279668  649678 command_runner.go:130] > # nri_validator_tolerate_missing_plugins_annotation = ""
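Enabling the validator amounts to flipping the booleans from the commented template above in a drop-in; a sketch that rejects only OCI hook injection:

	[crio.nri.default_validator]
	nri_enable_default_validator = true
	nri_validator_reject_oci_hook_adjustment = true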
	I1006 14:21:49.279681  649678 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1006 14:21:49.279691  649678 command_runner.go:130] > [crio.stats]
	I1006 14:21:49.279704  649678 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1006 14:21:49.279717  649678 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1006 14:21:49.279728  649678 command_runner.go:130] > # stats_collection_period = 0
	I1006 14:21:49.279739  649678 command_runner.go:130] > # The number of seconds between collecting pod/container stats and pod
	I1006 14:21:49.279753  649678 command_runner.go:130] > # sandbox metrics. If set to 0, the metrics/stats are collected on-demand instead.
	I1006 14:21:49.279764  649678 command_runner.go:130] > # collection_period = 0
	I1006 14:21:49.279811  649678 command_runner.go:130] ! time="2025-10-06T14:21:49.258239123Z" level=info msg="Updating config from single file: /etc/crio/crio.conf"
	I1006 14:21:49.279828  649678 command_runner.go:130] ! time="2025-10-06T14:21:49.258265766Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf"
	I1006 14:21:49.279842  649678 command_runner.go:130] ! time="2025-10-06T14:21:49.258283938Z" level=info msg="Skipping not-existing config file \"/etc/crio/crio.conf\""
	I1006 14:21:49.279857  649678 command_runner.go:130] ! time="2025-10-06T14:21:49.25830256Z" level=info msg="Updating config from path: /etc/crio/crio.conf.d"
	I1006 14:21:49.279875  649678 command_runner.go:130] ! time="2025-10-06T14:21:49.258357499Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:21:49.279892  649678 command_runner.go:130] ! time="2025-10-06T14:21:49.258517334Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/10-crio.conf"
	I1006 14:21:49.279912  649678 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1006 14:21:49.280045  649678 cni.go:84] Creating CNI manager for ""
	I1006 14:21:49.280059  649678 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1006 14:21:49.280078  649678 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1006 14:21:49.280122  649678 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-135520 NodeName:functional-135520 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1006 14:21:49.280303  649678 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-135520"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1006 14:21:49.280384  649678 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1006 14:21:49.288800  649678 command_runner.go:130] > kubeadm
	I1006 14:21:49.288826  649678 command_runner.go:130] > kubectl
	I1006 14:21:49.288833  649678 command_runner.go:130] > kubelet
	I1006 14:21:49.288864  649678 binaries.go:44] Found k8s binaries, skipping transfer
	I1006 14:21:49.288912  649678 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1006 14:21:49.296476  649678 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1006 14:21:49.308883  649678 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1006 14:21:49.321172  649678 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
	I1006 14:21:49.333376  649678 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1006 14:21:49.336963  649678 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1006 14:21:49.337019  649678 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 14:21:49.424422  649678 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1006 14:21:49.437476  649678 certs.go:69] Setting up /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520 for IP: 192.168.49.2
	I1006 14:21:49.437505  649678 certs.go:195] generating shared ca certs ...
	I1006 14:21:49.437527  649678 certs.go:227] acquiring lock for ca certs: {Name:mka0cc25cb6a953e937aa825fc55167759271aaf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:21:49.437678  649678 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21701-626179/.minikube/ca.key
	I1006 14:21:49.437730  649678 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21701-626179/.minikube/proxy-client-ca.key
	I1006 14:21:49.437748  649678 certs.go:257] generating profile certs ...
	I1006 14:21:49.437847  649678 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/client.key
	I1006 14:21:49.437896  649678 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/apiserver.key.72a46e8e
	I1006 14:21:49.437936  649678 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/proxy-client.key
	I1006 14:21:49.437949  649678 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1006 14:21:49.437963  649678 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1006 14:21:49.437984  649678 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1006 14:21:49.438003  649678 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1006 14:21:49.438018  649678 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1006 14:21:49.438035  649678 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1006 14:21:49.438049  649678 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1006 14:21:49.438064  649678 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1006 14:21:49.438123  649678 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/629719.pem (1338 bytes)
	W1006 14:21:49.438160  649678 certs.go:480] ignoring /home/jenkins/minikube-integration/21701-626179/.minikube/certs/629719_empty.pem, impossibly tiny 0 bytes
	I1006 14:21:49.438171  649678 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca-key.pem (1675 bytes)
	I1006 14:21:49.438196  649678 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem (1082 bytes)
	I1006 14:21:49.438246  649678 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/cert.pem (1123 bytes)
	I1006 14:21:49.438271  649678 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/key.pem (1679 bytes)
	I1006 14:21:49.438316  649678 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem (1708 bytes)
	I1006 14:21:49.438344  649678 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem -> /usr/share/ca-certificates/6297192.pem
	I1006 14:21:49.438359  649678 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1006 14:21:49.438381  649678 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/629719.pem -> /usr/share/ca-certificates/629719.pem
	I1006 14:21:49.439032  649678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1006 14:21:49.456437  649678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1006 14:21:49.473578  649678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1006 14:21:49.490593  649678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1006 14:21:49.508347  649678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1006 14:21:49.525339  649678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1006 14:21:49.541997  649678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1006 14:21:49.558467  649678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1006 14:21:49.576359  649678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem --> /usr/share/ca-certificates/6297192.pem (1708 bytes)
	I1006 14:21:49.593578  649678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1006 14:21:49.610863  649678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/certs/629719.pem --> /usr/share/ca-certificates/629719.pem (1338 bytes)
	I1006 14:21:49.628123  649678 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1006 14:21:49.640270  649678 ssh_runner.go:195] Run: openssl version
	I1006 14:21:49.646279  649678 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1006 14:21:49.646391  649678 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6297192.pem && ln -fs /usr/share/ca-certificates/6297192.pem /etc/ssl/certs/6297192.pem"
	I1006 14:21:49.654553  649678 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6297192.pem
	I1006 14:21:49.658110  649678 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct  6 14:13 /usr/share/ca-certificates/6297192.pem
	I1006 14:21:49.658254  649678 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  6 14:13 /usr/share/ca-certificates/6297192.pem
	I1006 14:21:49.658303  649678 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6297192.pem
	I1006 14:21:49.692318  649678 command_runner.go:130] > 3ec20f2e
	I1006 14:21:49.692406  649678 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6297192.pem /etc/ssl/certs/3ec20f2e.0"
	I1006 14:21:49.700814  649678 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1006 14:21:49.709140  649678 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1006 14:21:49.712721  649678 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct  6 13:56 /usr/share/ca-certificates/minikubeCA.pem
	I1006 14:21:49.712738  649678 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  6 13:56 /usr/share/ca-certificates/minikubeCA.pem
	I1006 14:21:49.712772  649678 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1006 14:21:49.745663  649678 command_runner.go:130] > b5213941
	I1006 14:21:49.745998  649678 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1006 14:21:49.754083  649678 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/629719.pem && ln -fs /usr/share/ca-certificates/629719.pem /etc/ssl/certs/629719.pem"
	I1006 14:21:49.762664  649678 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/629719.pem
	I1006 14:21:49.766415  649678 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct  6 14:13 /usr/share/ca-certificates/629719.pem
	I1006 14:21:49.766461  649678 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  6 14:13 /usr/share/ca-certificates/629719.pem
	I1006 14:21:49.766502  649678 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/629719.pem
	I1006 14:21:49.800644  649678 command_runner.go:130] > 51391683
	I1006 14:21:49.800985  649678 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/629719.pem /etc/ssl/certs/51391683.0"
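The three hash-and-link Run lines above follow the standard OpenSSL CA-directory convention: compute the certificate's subject hash, then link <hash>.0 in /etc/ssl/certs to the certificate. Condensed into a shell sketch:

	hash=$(openssl x509 -hash -noout -in /etc/ssl/certs/629719.pem)   # prints e.g. 51391683
	sudo ln -fs /etc/ssl/certs/629719.pem "/etc/ssl/certs/${hash}.0"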
	I1006 14:21:49.809049  649678 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1006 14:21:49.812721  649678 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1006 14:21:49.812776  649678 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1006 14:21:49.812784  649678 command_runner.go:130] > Device: 8,1	Inode: 580300      Links: 1
	I1006 14:21:49.812793  649678 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1006 14:21:49.812800  649678 command_runner.go:130] > Access: 2025-10-06 14:17:42.533320203 +0000
	I1006 14:21:49.812811  649678 command_runner.go:130] > Modify: 2025-10-06 14:13:37.457627952 +0000
	I1006 14:21:49.812819  649678 command_runner.go:130] > Change: 2025-10-06 14:13:37.457627952 +0000
	I1006 14:21:49.812829  649678 command_runner.go:130] >  Birth: 2025-10-06 14:13:37.457627952 +0000
	I1006 14:21:49.812886  649678 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1006 14:21:49.846896  649678 command_runner.go:130] > Certificate will not expire
	I1006 14:21:49.847277  649678 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1006 14:21:49.881096  649678 command_runner.go:130] > Certificate will not expire
	I1006 14:21:49.881431  649678 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1006 14:21:49.916333  649678 command_runner.go:130] > Certificate will not expire
	I1006 14:21:49.916837  649678 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1006 14:21:49.951128  649678 command_runner.go:130] > Certificate will not expire
	I1006 14:21:49.951323  649678 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1006 14:21:49.984919  649678 command_runner.go:130] > Certificate will not expire
	I1006 14:21:49.985255  649678 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1006 14:21:50.018710  649678 command_runner.go:130] > Certificate will not expire
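The repeated "Certificate will not expire" lines are openssl's own output: x509 -checkend 86400 prints that message and exits 0 only if the certificate is still valid 86400 seconds (24 hours) from now. The same check in isolation:

	# exits non-zero (and prints "Certificate will expire") if the cert lapses within 24 h
	openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400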
	I1006 14:21:50.018987  649678 kubeadm.go:400] StartCluster: {Name:functional-135520 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-135520 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 14:21:50.019061  649678 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1006 14:21:50.019118  649678 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1006 14:21:50.047552  649678 cri.go:89] found id: ""
	I1006 14:21:50.047624  649678 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1006 14:21:50.055103  649678 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1006 14:21:50.055125  649678 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1006 14:21:50.055137  649678 command_runner.go:130] > /var/lib/minikube/etcd:
	I1006 14:21:50.055780  649678 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1006 14:21:50.055795  649678 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1006 14:21:50.055835  649678 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1006 14:21:50.063106  649678 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1006 14:21:50.063218  649678 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-135520" does not appear in /home/jenkins/minikube-integration/21701-626179/kubeconfig
	I1006 14:21:50.063263  649678 kubeconfig.go:62] /home/jenkins/minikube-integration/21701-626179/kubeconfig needs updating (will repair): [kubeconfig missing "functional-135520" cluster setting kubeconfig missing "functional-135520" context setting]
	I1006 14:21:50.063581  649678 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-626179/kubeconfig: {Name:mke84a74c9d22714f21826744ac414fa621492d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:21:50.064282  649678 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21701-626179/kubeconfig
	I1006 14:21:50.064435  649678 kapi.go:59] client config for functional-135520: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/client.crt", KeyFile:"/home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/client.key", CAFile:"/home/jenkins/minikube-integration/21701-626179/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1006 14:21:50.064874  649678 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1006 14:21:50.064894  649678 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1006 14:21:50.064898  649678 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1006 14:21:50.064902  649678 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1006 14:21:50.064906  649678 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1006 14:21:50.064950  649678 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1006 14:21:50.065393  649678 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1006 14:21:50.072886  649678 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1006 14:21:50.072922  649678 kubeadm.go:601] duration metric: took 17.120794ms to restartPrimaryControlPlane
	I1006 14:21:50.072932  649678 kubeadm.go:402] duration metric: took 53.951913ms to StartCluster
	I1006 14:21:50.072948  649678 settings.go:142] acquiring lock: {Name:mk49b10f71f24d1f54d5c453b3b04e717e9a9100 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:21:50.073763  649678 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21701-626179/kubeconfig
	I1006 14:21:50.074346  649678 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-626179/kubeconfig: {Name:mke84a74c9d22714f21826744ac414fa621492d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:21:50.074579  649678 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1006 14:21:50.074661  649678 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1006 14:21:50.074799  649678 addons.go:69] Setting storage-provisioner=true in profile "functional-135520"
	I1006 14:21:50.074825  649678 addons.go:238] Setting addon storage-provisioner=true in "functional-135520"
	I1006 14:21:50.074761  649678 config.go:182] Loaded profile config "functional-135520": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 14:21:50.074866  649678 addons.go:69] Setting default-storageclass=true in profile "functional-135520"
	I1006 14:21:50.074859  649678 host.go:66] Checking if "functional-135520" exists ...
	I1006 14:21:50.074881  649678 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-135520"
	I1006 14:21:50.075174  649678 cli_runner.go:164] Run: docker container inspect functional-135520 --format={{.State.Status}}
	I1006 14:21:50.075488  649678 cli_runner.go:164] Run: docker container inspect functional-135520 --format={{.State.Status}}
	I1006 14:21:50.077233  649678 out.go:179] * Verifying Kubernetes components...
	I1006 14:21:50.078370  649678 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 14:21:50.095495  649678 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21701-626179/kubeconfig
	I1006 14:21:50.095656  649678 kapi.go:59] client config for functional-135520: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/client.crt", KeyFile:"/home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/client.key", CAFile:"/home/jenkins/minikube-integration/21701-626179/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1006 14:21:50.095938  649678 addons.go:238] Setting addon default-storageclass=true in "functional-135520"
	I1006 14:21:50.095974  649678 host.go:66] Checking if "functional-135520" exists ...
	I1006 14:21:50.096327  649678 cli_runner.go:164] Run: docker container inspect functional-135520 --format={{.State.Status}}
	I1006 14:21:50.100068  649678 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1006 14:21:50.101767  649678 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 14:21:50.101786  649678 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1006 14:21:50.101831  649678 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-135520
	I1006 14:21:50.122986  649678 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1006 14:21:50.123017  649678 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1006 14:21:50.123083  649678 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-135520
	I1006 14:21:50.128190  649678 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32878 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/functional-135520/id_rsa Username:docker}
	I1006 14:21:50.141305  649678 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32878 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/functional-135520/id_rsa Username:docker}
	I1006 14:21:50.171892  649678 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1006 14:21:50.185683  649678 node_ready.go:35] waiting up to 6m0s for node "functional-135520" to be "Ready" ...
	I1006 14:21:50.185842  649678 type.go:168] "Request Body" body=""
	I1006 14:21:50.185906  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:21:50.186211  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:21:50.238569  649678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 14:21:50.250369  649678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1006 14:21:50.297302  649678 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:21:50.297371  649678 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:50.297421  649678 retry.go:31] will retry after 341.445316ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:50.306094  649678 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:21:50.306137  649678 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:50.306156  649678 retry.go:31] will retry after 289.440052ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:50.596773  649678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1006 14:21:50.639555  649678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 14:21:50.652478  649678 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:21:50.652547  649678 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:50.652572  649678 retry.go:31] will retry after 276.474886ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:50.686728  649678 type.go:168] "Request Body" body=""
	I1006 14:21:50.686820  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:21:50.687192  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:21:50.696244  649678 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:21:50.696297  649678 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:50.696320  649678 retry.go:31] will retry after 208.115159ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:50.904724  649678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 14:21:50.929427  649678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1006 14:21:50.961651  649678 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:21:50.961718  649678 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:50.961741  649678 retry.go:31] will retry after 526.763649ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:50.984274  649678 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:21:50.988765  649678 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:50.988799  649678 retry.go:31] will retry after 299.40846ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:51.186119  649678 type.go:168] "Request Body" body=""
	I1006 14:21:51.186232  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:21:51.186600  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:21:51.288897  649678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1006 14:21:51.344296  649678 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:21:51.344362  649678 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:51.344390  649678 retry.go:31] will retry after 1.255489073s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:51.489635  649678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 14:21:51.542509  649678 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:21:51.545518  649678 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:51.545558  649678 retry.go:31] will retry after 1.109395122s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:51.686960  649678 type.go:168] "Request Body" body=""
	I1006 14:21:51.687044  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:21:51.687429  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:21:52.186098  649678 type.go:168] "Request Body" body=""
	I1006 14:21:52.186177  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:21:52.186579  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:21:52.186647  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:21:52.600133  649678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1006 14:21:52.654438  649678 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:21:52.654496  649678 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:52.654515  649678 retry.go:31] will retry after 1.609702337s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:52.655551  649678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 14:21:52.686897  649678 type.go:168] "Request Body" body=""
	I1006 14:21:52.686998  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:21:52.687382  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:21:52.709517  649678 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:21:52.709578  649678 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:52.709602  649678 retry.go:31] will retry after 1.712984533s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:53.186162  649678 type.go:168] "Request Body" body=""
	I1006 14:21:53.186283  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:21:53.186685  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:21:53.686305  649678 type.go:168] "Request Body" body=""
	I1006 14:21:53.686410  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:21:53.686778  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:21:54.186389  649678 type.go:168] "Request Body" body=""
	I1006 14:21:54.186497  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:21:54.186895  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:21:54.186974  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:21:54.265161  649678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1006 14:21:54.320415  649678 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:21:54.320465  649678 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:54.320484  649678 retry.go:31] will retry after 1.901708606s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:54.423753  649678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 14:21:54.478522  649678 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:21:54.478584  649678 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:54.478619  649678 retry.go:31] will retry after 1.584586857s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:54.685879  649678 type.go:168] "Request Body" body=""
	I1006 14:21:54.685954  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:21:54.686309  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:21:55.185880  649678 type.go:168] "Request Body" body=""
	I1006 14:21:55.185961  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:21:55.186309  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:21:55.685969  649678 type.go:168] "Request Body" body=""
	I1006 14:21:55.686071  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:21:55.686478  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:21:56.063981  649678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 14:21:56.118717  649678 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:21:56.118774  649678 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:56.118807  649678 retry.go:31] will retry after 2.733091815s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:56.185931  649678 type.go:168] "Request Body" body=""
	I1006 14:21:56.186008  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:21:56.186344  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:21:56.222525  649678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1006 14:21:56.276120  649678 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:21:56.276196  649678 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:56.276235  649678 retry.go:31] will retry after 1.816128137s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:56.686920  649678 type.go:168] "Request Body" body=""
	I1006 14:21:56.687009  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:21:56.687408  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:21:56.687471  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:21:57.186225  649678 type.go:168] "Request Body" body=""
	I1006 14:21:57.186314  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:21:57.186655  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:21:57.686516  649678 type.go:168] "Request Body" body=""
	I1006 14:21:57.686601  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:21:57.686915  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:21:58.093526  649678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1006 14:21:58.148989  649678 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:21:58.149041  649678 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:58.149066  649678 retry.go:31] will retry after 2.492749577s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:58.186253  649678 type.go:168] "Request Body" body=""
	I1006 14:21:58.186345  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:21:58.186702  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:21:58.686540  649678 type.go:168] "Request Body" body=""
	I1006 14:21:58.686625  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:21:58.686963  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:21:58.852333  649678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 14:21:58.907770  649678 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:21:58.907811  649678 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:58.907831  649678 retry.go:31] will retry after 3.408188619s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:59.186242  649678 type.go:168] "Request Body" body=""
	I1006 14:21:59.186325  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:21:59.186705  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:21:59.186784  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:21:59.686631  649678 type.go:168] "Request Body" body=""
	I1006 14:21:59.686729  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:21:59.687112  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:00.185903  649678 type.go:168] "Request Body" body=""
	I1006 14:22:00.185998  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:00.186365  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:00.642984  649678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1006 14:22:00.686799  649678 type.go:168] "Request Body" body=""
	I1006 14:22:00.686880  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:00.687243  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:00.698375  649678 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:22:00.698427  649678 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:22:00.698448  649678 retry.go:31] will retry after 6.594317937s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:22:01.186036  649678 type.go:168] "Request Body" body=""
	I1006 14:22:01.186143  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:01.186563  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:01.686476  649678 type.go:168] "Request Body" body=""
	I1006 14:22:01.686584  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:01.686981  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:22:01.687058  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:22:02.186608  649678 type.go:168] "Request Body" body=""
	I1006 14:22:02.186705  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:02.187061  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:02.316279  649678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 14:22:02.370200  649678 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:22:02.373358  649678 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:22:02.373390  649678 retry.go:31] will retry after 5.569612861s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:22:02.686858  649678 type.go:168] "Request Body" body=""
	I1006 14:22:02.686947  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:02.687350  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:03.185954  649678 type.go:168] "Request Body" body=""
	I1006 14:22:03.186035  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:03.186451  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:03.686069  649678 type.go:168] "Request Body" body=""
	I1006 14:22:03.686185  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:03.686679  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:04.186146  649678 type.go:168] "Request Body" body=""
	I1006 14:22:04.186265  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:04.186682  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:22:04.186759  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:22:04.686312  649678 type.go:168] "Request Body" body=""
	I1006 14:22:04.686448  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:04.686778  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:05.186355  649678 type.go:168] "Request Body" body=""
	I1006 14:22:05.186442  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:05.186804  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:05.686470  649678 type.go:168] "Request Body" body=""
	I1006 14:22:05.686548  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:05.686892  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:06.186409  649678 type.go:168] "Request Body" body=""
	I1006 14:22:06.186493  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:06.186841  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:22:06.186906  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:22:06.686653  649678 type.go:168] "Request Body" body=""
	I1006 14:22:06.686731  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:06.687077  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:07.186430  649678 type.go:168] "Request Body" body=""
	I1006 14:22:07.186515  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:07.186850  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:07.293062  649678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1006 14:22:07.347879  649678 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:22:07.347938  649678 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:22:07.347958  649678 retry.go:31] will retry after 11.599769479s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:22:07.686422  649678 type.go:168] "Request Body" body=""
	I1006 14:22:07.686519  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:07.686919  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:07.943325  649678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 14:22:07.994639  649678 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:22:07.997627  649678 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:22:07.997659  649678 retry.go:31] will retry after 6.982471195s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:22:08.186017  649678 type.go:168] "Request Body" body=""
	I1006 14:22:08.186095  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:08.186523  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:08.686113  649678 type.go:168] "Request Body" body=""
	I1006 14:22:08.686234  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:08.686617  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:22:08.686693  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:22:09.186236  649678 type.go:168] "Request Body" body=""
	I1006 14:22:09.186345  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:09.186717  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:09.686283  649678 type.go:168] "Request Body" body=""
	I1006 14:22:09.686365  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:09.686759  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:10.186558  649678 type.go:168] "Request Body" body=""
	I1006 14:22:10.186657  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:10.187046  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:10.686665  649678 type.go:168] "Request Body" body=""
	I1006 14:22:10.686743  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:10.687116  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:22:10.687244  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:22:11.186799  649678 type.go:168] "Request Body" body=""
	I1006 14:22:11.186892  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:11.187296  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:11.686074  649678 type.go:168] "Request Body" body=""
	I1006 14:22:11.686224  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:11.686586  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:12.186151  649678 type.go:168] "Request Body" body=""
	I1006 14:22:12.186305  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:12.186696  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:12.686260  649678 type.go:168] "Request Body" body=""
	I1006 14:22:12.686345  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:12.686706  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:13.186307  649678 type.go:168] "Request Body" body=""
	I1006 14:22:13.186418  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:13.186788  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:22:13.186857  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:22:13.686381  649678 type.go:168] "Request Body" body=""
	I1006 14:22:13.686488  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:13.686854  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:14.186497  649678 type.go:168] "Request Body" body=""
	I1006 14:22:14.186592  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:14.186941  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:14.686598  649678 type.go:168] "Request Body" body=""
	I1006 14:22:14.686682  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:14.687029  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:14.980397  649678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 14:22:15.034191  649678 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:22:15.034263  649678 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:22:15.034288  649678 retry.go:31] will retry after 12.004605903s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:22:15.186550  649678 type.go:168] "Request Body" body=""
	I1006 14:22:15.186633  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:15.187020  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:22:15.187102  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:22:15.686717  649678 type.go:168] "Request Body" body=""
	I1006 14:22:15.686812  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:15.687196  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:16.186809  649678 type.go:168] "Request Body" body=""
	I1006 14:22:16.186884  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:16.187256  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:16.686013  649678 type.go:168] "Request Body" body=""
	I1006 14:22:16.686098  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:16.686488  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:17.186068  649678 type.go:168] "Request Body" body=""
	I1006 14:22:17.186146  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:17.186573  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:17.686133  649678 type.go:168] "Request Body" body=""
	I1006 14:22:17.686253  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:17.686622  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:22:17.686699  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:22:18.186192  649678 type.go:168] "Request Body" body=""
	I1006 14:22:18.186295  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:18.186693  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:18.686281  649678 type.go:168] "Request Body" body=""
	I1006 14:22:18.686358  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:18.686685  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:18.948057  649678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1006 14:22:19.002723  649678 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:22:19.002770  649678 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:22:19.002791  649678 retry.go:31] will retry after 9.663618433s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
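
Every failure in this stretch bottoms out in the same symptom: nothing is accepting TCP connections on the apiserver port (8441 on localhost for kubectl, 192.168.49.2:8441 for the node polls), so dials fail immediately with connection refused rather than hanging until a timeout. A standalone probe reproducing just that symptom:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// With no apiserver listening on 8441, this returns at once with an
	// error like "dial tcp [::1]:8441: connect: connection refused" —
	// the same error wrapped by kubectl and minikube in the log above.
	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port open")
}
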
	I1006 14:22:19.186105  649678 type.go:168] "Request Body" body=""
	I1006 14:22:19.186250  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:19.186659  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:19.686518  649678 type.go:168] "Request Body" body=""
	I1006 14:22:19.686605  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:19.686939  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:22:19.687009  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:22:20.186860  649678 type.go:168] "Request Body" body=""
	I1006 14:22:20.186965  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:20.187367  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:20.686167  649678 type.go:168] "Request Body" body=""
	I1006 14:22:20.686275  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:20.686635  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:21.186460  649678 type.go:168] "Request Body" body=""
	I1006 14:22:21.186548  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:21.186942  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:21.686821  649678 type.go:168] "Request Body" body=""
	I1006 14:22:21.686902  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:21.687332  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:22:21.687397  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:22:22.186083  649678 type.go:168] "Request Body" body=""
	I1006 14:22:22.186166  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:22.186569  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:22.686397  649678 type.go:168] "Request Body" body=""
	I1006 14:22:22.686491  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:22.686903  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:23.186781  649678 type.go:168] "Request Body" body=""
	I1006 14:22:23.186870  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:23.187268  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:23.686042  649678 type.go:168] "Request Body" body=""
	I1006 14:22:23.686129  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:23.686575  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:24.186356  649678 type.go:168] "Request Body" body=""
	I1006 14:22:24.186489  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:24.186921  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:22:24.187013  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:22:24.686802  649678 type.go:168] "Request Body" body=""
	I1006 14:22:24.686904  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:24.687313  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:25.186100  649678 type.go:168] "Request Body" body=""
	I1006 14:22:25.186254  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:25.186644  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:25.686394  649678 type.go:168] "Request Body" body=""
	I1006 14:22:25.686478  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:25.686854  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:26.186709  649678 type.go:168] "Request Body" body=""
	I1006 14:22:26.186843  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:26.187291  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:22:26.187357  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:22:26.686108  649678 type.go:168] "Request Body" body=""
	I1006 14:22:26.686232  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:26.686608  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:27.039059  649678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 14:22:27.094007  649678 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:22:27.097496  649678 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:22:27.097534  649678 retry.go:31] will retry after 22.614868096s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:22:27.186847  649678 type.go:168] "Request Body" body=""
	I1006 14:22:27.186925  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:27.187319  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:27.686152  649678 type.go:168] "Request Body" body=""
	I1006 14:22:27.686302  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:27.686651  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:28.186562  649678 type.go:168] "Request Body" body=""
	I1006 14:22:28.186655  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:28.187109  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:28.666677  649678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1006 14:22:28.686315  649678 type.go:168] "Request Body" body=""
	I1006 14:22:28.686424  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:28.686765  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:22:28.686846  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:22:28.722750  649678 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:22:28.722794  649678 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:22:28.722814  649678 retry.go:31] will retry after 11.553901016s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:22:29.186360  649678 type.go:168] "Request Body" body=""
	I1006 14:22:29.186463  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:29.186854  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:29.686594  649678 type.go:168] "Request Body" body=""
	I1006 14:22:29.686674  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:29.687059  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:30.186847  649678 type.go:168] "Request Body" body=""
	I1006 14:22:30.186978  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:30.187394  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:30.685980  649678 type.go:168] "Request Body" body=""
	I1006 14:22:30.686063  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:30.686514  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:31.186103  649678 type.go:168] "Request Body" body=""
	I1006 14:22:31.186273  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:31.186671  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:22:31.186735  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:22:31.686585  649678 type.go:168] "Request Body" body=""
	I1006 14:22:31.686699  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:31.687091  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:32.186757  649678 type.go:168] "Request Body" body=""
	I1006 14:22:32.186864  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:32.187311  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:32.685887  649678 type.go:168] "Request Body" body=""
	I1006 14:22:32.685973  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:32.686388  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:33.186057  649678 type.go:168] "Request Body" body=""
	I1006 14:22:33.186156  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:33.186557  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:33.686144  649678 type.go:168] "Request Body" body=""
	I1006 14:22:33.686262  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:33.686648  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:22:33.686721  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:22:34.186259  649678 type.go:168] "Request Body" body=""
	I1006 14:22:34.186354  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:34.186737  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:34.686419  649678 type.go:168] "Request Body" body=""
	I1006 14:22:34.686498  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:34.686871  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:35.186497  649678 type.go:168] "Request Body" body=""
	I1006 14:22:35.186603  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:35.186980  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:35.686662  649678 type.go:168] "Request Body" body=""
	I1006 14:22:35.686763  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:35.687122  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:22:35.687197  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:22:36.186754  649678 type.go:168] "Request Body" body=""
	I1006 14:22:36.186848  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:36.187316  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:36.686164  649678 type.go:168] "Request Body" body=""
	I1006 14:22:36.686314  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:36.686722  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:37.186321  649678 type.go:168] "Request Body" body=""
	I1006 14:22:37.186420  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:37.186775  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:37.686633  649678 type.go:168] "Request Body" body=""
	I1006 14:22:37.686715  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:37.687101  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:38.185900  649678 type.go:168] "Request Body" body=""
	I1006 14:22:38.185994  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:38.186391  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:22:38.186465  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:22:38.686198  649678 type.go:168] "Request Body" body=""
	I1006 14:22:38.686309  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:38.686708  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:39.186526  649678 type.go:168] "Request Body" body=""
	I1006 14:22:39.186655  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:39.187049  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:39.685917  649678 type.go:168] "Request Body" body=""
	I1006 14:22:39.686005  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:39.686446  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:40.186230  649678 type.go:168] "Request Body" body=""
	I1006 14:22:40.186337  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:40.186733  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:22:40.186801  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
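
The node polls above land on a fixed ~500ms grid (timestamps alternating between xx.186 and xx.686), consistent with a simple sleep-driven poll loop rather than a watch. A minimal loop in that shape, with pollUntilReady and its check callback as assumed names:

package main

import (
	"errors"
	"fmt"
	"time"
)

// pollUntilReady calls check every interval until it reports ready or the
// timeout elapses, logging transient errors and retrying, much like the
// repeated "error getting node ... (will retry)" lines above.
func pollUntilReady(check func() (bool, error), interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		ready, err := check()
		if err != nil {
			fmt.Println("error getting node condition (will retry):", err)
		} else if ready {
			return nil
		}
		time.Sleep(interval)
	}
	return errors.New("timed out waiting for node to be Ready")
}

func main() {
	err := pollUntilReady(func() (bool, error) {
		return false, errors.New("connect: connection refused")
	}, 500*time.Millisecond, 2*time.Second)
	fmt.Println(err)
}
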
	I1006 14:22:40.276916  649678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1006 14:22:40.331801  649678 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:22:40.335179  649678 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:22:40.335232  649678 retry.go:31] will retry after 39.41387573s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:22:40.686763  649678 type.go:168] "Request Body" body=""
	I1006 14:22:40.686899  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:40.687303  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:41.186091  649678 type.go:168] "Request Body" body=""
	I1006 14:22:41.186200  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:41.186603  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:41.686526  649678 type.go:168] "Request Body" body=""
	I1006 14:22:41.686626  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:41.687010  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:42.186887  649678 type.go:168] "Request Body" body=""
	I1006 14:22:42.186964  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:42.187345  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:22:42.187421  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:22:42.686150  649678 type.go:168] "Request Body" body=""
	I1006 14:22:42.686267  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:42.686658  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:43.186527  649678 type.go:168] "Request Body" body=""
	I1006 14:22:43.186614  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:43.186999  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:43.686820  649678 type.go:168] "Request Body" body=""
	I1006 14:22:43.686909  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:43.687318  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:44.186096  649678 type.go:168] "Request Body" body=""
	I1006 14:22:44.186247  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:44.186640  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:44.686530  649678 type.go:168] "Request Body" body=""
	I1006 14:22:44.686615  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:44.687010  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:22:44.687087  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:22:45.186889  649678 type.go:168] "Request Body" body=""
	I1006 14:22:45.186975  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:45.187340  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:45.686094  649678 type.go:168] "Request Body" body=""
	I1006 14:22:45.686177  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:45.686579  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:46.186357  649678 type.go:168] "Request Body" body=""
	I1006 14:22:46.186468  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:46.186826  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:46.686734  649678 type.go:168] "Request Body" body=""
	I1006 14:22:46.686824  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:46.687252  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:22:46.687331  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:22:47.186069  649678 type.go:168] "Request Body" body=""
	I1006 14:22:47.186155  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:47.186586  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:47.686023  649678 type.go:168] "Request Body" body=""
	I1006 14:22:47.686126  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:47.686582  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:48.186406  649678 type.go:168] "Request Body" body=""
	I1006 14:22:48.186501  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:48.186908  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:48.686766  649678 type.go:168] "Request Body" body=""
	I1006 14:22:48.686850  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:48.687229  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:49.186033  649678 type.go:168] "Request Body" body=""
	I1006 14:22:49.186123  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:49.186550  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:22:49.186623  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:22:49.686385  649678 type.go:168] "Request Body" body=""
	I1006 14:22:49.686504  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:49.686900  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:49.713160  649678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 14:22:49.766183  649678 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:22:49.769572  649678 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:22:49.769611  649678 retry.go:31] will retry after 48.442133458s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
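
Note that these applies fail before the manifest is even considered: kubectl apply first downloads the server's OpenAPI schema from /openapi/v2 to validate the YAML, and it is that fetch that dies with connection refused — which is why kubectl's message suggests --validate=false. A sketch reproducing just the schema fetch (TLS verification is skipped here purely for illustration; kubectl uses the CA from the kubeconfig):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 32 * time.Second,
		Transport: &http.Transport{
			// Illustration only: the real apiserver cert would be verified
			// against the cluster CA configured in the kubeconfig.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://localhost:8441/openapi/v2")
	if err != nil {
		// With nothing listening on 8441, this fails the same way as the
		// log: "dial tcp [::1]:8441: connect: connection refused".
		fmt.Println("failed to download openapi:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("openapi status:", resp.Status)
}
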
	I1006 14:22:50.186500  649678 type.go:168] "Request Body" body=""
	I1006 14:22:50.186594  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:50.186974  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:50.686617  649678 type.go:168] "Request Body" body=""
	I1006 14:22:50.686714  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:50.687119  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:51.186841  649678 type.go:168] "Request Body" body=""
	I1006 14:22:51.186935  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:51.187337  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:22:51.187405  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:22:51.686028  649678 type.go:168] "Request Body" body=""
	I1006 14:22:51.686127  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:51.686519  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:52.186126  649678 type.go:168] "Request Body" body=""
	I1006 14:22:52.186243  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:52.186633  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:52.686285  649678 type.go:168] "Request Body" body=""
	I1006 14:22:52.686514  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:52.686906  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:53.186666  649678 type.go:168] "Request Body" body=""
	I1006 14:22:53.186777  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:53.187137  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:53.686806  649678 type.go:168] "Request Body" body=""
	I1006 14:22:53.686890  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:53.687265  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:22:53.687341  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	[... polling continued at ~500 ms intervals with identical request/response pairs; "connection refused" warnings repeated at 14:22:56, 14:22:58, 14:23:00, 14:23:02, 14:23:05, 14:23:07, 14:23:09, 14:23:12, 14:23:14, and 14:23:16 ...]
	W1006 14:23:19.187288  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:23:19.686017  649678 type.go:168] "Request Body" body=""
	I1006 14:23:19.686099  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:23:19.686535  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
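The lines above are minikube's node-readiness wait: node_ready.go issues a GET against the apiserver roughly every 500 ms, and each refused connection is logged as a warning and retried. A minimal Go sketch of that poll-and-retry pattern, using only the standard library; the endpoint, interval, deadline, and TLS handling here are illustrative assumptions, not minikube's actual code:

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// pollNodeReady issues GET requests against url until one succeeds or the
	// deadline passes. Dial errors (e.g. "connection refused" while the
	// apiserver restarts) are logged and retried, mirroring the wait loop in
	// the log above. Interval and deadline are illustrative values.
	func pollNodeReady(url string, interval, deadline time.Duration) error {
		client := &http.Client{
			Timeout: 2 * time.Second,
			Transport: &http.Transport{
				// Sketch only: skip cert verification for the self-signed endpoint.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		stop := time.Now().Add(deadline)
		for time.Now().Before(stop) {
			resp, err := client.Get(url)
			if err != nil {
				// Mirrors the "will retry" warnings in the log above.
				fmt.Printf("W error getting node (will retry): %v\n", err)
				time.Sleep(interval)
				continue
			}
			resp.Body.Close()
			fmt.Printf("I response status=%q\n", resp.Status)
			return nil
		}
		return fmt.Errorf("node not reachable within %s", deadline)
	}

	func main() {
		// Hypothetical invocation matching the endpoint polled in the log.
		_ = pollNodeReady("https://192.168.49.2:8441/api/v1/nodes/functional-135520",
			500*time.Millisecond, 2*time.Minute)
	}

Skipping certificate verification is acceptable only in a throwaway sketch like this; the real client authenticates against the cluster CA from the kubeconfig.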
	I1006 14:23:19.749802  649678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1006 14:23:19.804037  649678 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:23:19.807440  649678 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:23:19.807591  649678 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
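The apply fails because kubectl's validation tries to download the OpenAPI schema from the unreachable apiserver; addons.go logs "apply failed, will retry" and re-runs the callback later. A minimal sketch of that retry-the-apply pattern in Go, assuming a kubectl binary on PATH; the manifest path, attempt count, and delay are illustrative, and this is not minikube's actual implementation:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// applyWithRetry shells out to kubectl apply and retries on failure,
	// the way the addon callbacks in the log above retry while the
	// apiserver refuses connections. attempts and delay are illustrative.
	func applyWithRetry(manifest string, attempts int, delay time.Duration) error {
		var lastErr error
		for i := 0; i < attempts; i++ {
			// Same command shape as the one run in the log:
			//   kubectl apply --force -f <manifest>
			out, err := exec.Command("kubectl", "apply", "--force", "-f", manifest).CombinedOutput()
			if err == nil {
				return nil
			}
			lastErr = fmt.Errorf("apply failed, will retry: %v\n%s", err, out)
			fmt.Println(lastErr)
			time.Sleep(delay)
		}
		return lastErr
	}

	func main() {
		// Hypothetical manifest path matching the addon applied above.
		_ = applyWithRetry("/etc/kubernetes/addons/storageclass.yaml", 3, 10*time.Second)
	}

The error text itself suggests --validate=false as an escape hatch, but that only skips schema validation; the apply would still fail here because the apiserver itself is refusing connections.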
	[... polling continued at ~500 ms intervals with identical request/response pairs; "connection refused" warnings repeated at 14:23:21, 14:23:23, 14:23:26, 14:23:28, 14:23:31, 14:23:33, and 14:23:35 ...]
	W1006 14:23:38.186784  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:23:38.212898  649678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 14:23:38.268129  649678 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:23:38.271217  649678 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:23:38.271448  649678 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1006 14:23:38.274179  649678 out.go:179] * Enabled addons: 
	I1006 14:23:38.275265  649678 addons.go:514] duration metric: took 1m48.200610857s for enable addons: enabled=[]
	[... polling continued at ~500 ms intervals, with "connection refused" warnings at 14:23:40, 14:23:42, 14:23:45, 14:23:47, and 14:23:49; the excerpt ends mid-request at 14:23:49.686 ...]
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:23:49.686567  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:23:50.186460  649678 type.go:168] "Request Body" body=""
	I1006 14:23:50.186578  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:23:50.186980  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:23:50.686657  649678 type.go:168] "Request Body" body=""
	I1006 14:23:50.686737  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:23:50.687102  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:23:51.186780  649678 type.go:168] "Request Body" body=""
	I1006 14:23:51.186879  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:23:51.187290  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:23:51.187357  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:23:51.686055  649678 type.go:168] "Request Body" body=""
	I1006 14:23:51.686146  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:23:51.686562  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:23:52.186152  649678 type.go:168] "Request Body" body=""
	I1006 14:23:52.186274  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:23:52.186663  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:23:52.686295  649678 type.go:168] "Request Body" body=""
	I1006 14:23:52.686384  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:23:52.686751  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:23:53.186373  649678 type.go:168] "Request Body" body=""
	I1006 14:23:53.186476  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:23:53.186876  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:23:53.686514  649678 type.go:168] "Request Body" body=""
	I1006 14:23:53.686604  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:23:53.686953  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:23:53.687018  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:23:54.186624  649678 type.go:168] "Request Body" body=""
	I1006 14:23:54.186724  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:23:54.187084  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:23:54.686709  649678 type.go:168] "Request Body" body=""
	I1006 14:23:54.686798  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:23:54.687167  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:23:55.186814  649678 type.go:168] "Request Body" body=""
	I1006 14:23:55.186908  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:23:55.187298  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:23:55.685884  649678 type.go:168] "Request Body" body=""
	I1006 14:23:55.685966  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:23:55.686336  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:23:56.185959  649678 type.go:168] "Request Body" body=""
	I1006 14:23:56.186053  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:23:56.186474  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:23:56.186543  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:23:56.686257  649678 type.go:168] "Request Body" body=""
	I1006 14:23:56.686340  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:23:56.686714  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:23:57.186250  649678 type.go:168] "Request Body" body=""
	I1006 14:23:57.186346  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:23:57.186713  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:23:57.686338  649678 type.go:168] "Request Body" body=""
	I1006 14:23:57.686411  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:23:57.686758  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:23:58.186346  649678 type.go:168] "Request Body" body=""
	I1006 14:23:58.186462  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:23:58.186853  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:23:58.186925  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:23:58.686513  649678 type.go:168] "Request Body" body=""
	I1006 14:23:58.686597  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:23:58.686941  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:23:59.186651  649678 type.go:168] "Request Body" body=""
	I1006 14:23:59.186746  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:23:59.187144  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:23:59.686847  649678 type.go:168] "Request Body" body=""
	I1006 14:23:59.686928  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:23:59.687299  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:00.186276  649678 type.go:168] "Request Body" body=""
	I1006 14:24:00.186374  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:00.186780  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:00.686385  649678 type.go:168] "Request Body" body=""
	I1006 14:24:00.686467  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:00.686835  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:24:00.686902  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:24:01.186504  649678 type.go:168] "Request Body" body=""
	I1006 14:24:01.186604  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:01.187011  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:01.686898  649678 type.go:168] "Request Body" body=""
	I1006 14:24:01.686984  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:01.687358  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:02.185992  649678 type.go:168] "Request Body" body=""
	I1006 14:24:02.186098  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:02.186510  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:02.686060  649678 type.go:168] "Request Body" body=""
	I1006 14:24:02.686175  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:02.686581  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:03.186144  649678 type.go:168] "Request Body" body=""
	I1006 14:24:03.186269  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:03.186664  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:24:03.186735  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:24:03.686298  649678 type.go:168] "Request Body" body=""
	I1006 14:24:03.686391  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:03.686764  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:04.186331  649678 type.go:168] "Request Body" body=""
	I1006 14:24:04.186436  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:04.186806  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:04.686453  649678 type.go:168] "Request Body" body=""
	I1006 14:24:04.686539  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:04.686904  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:05.186584  649678 type.go:168] "Request Body" body=""
	I1006 14:24:05.186677  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:05.187042  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:24:05.187118  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:24:05.686754  649678 type.go:168] "Request Body" body=""
	I1006 14:24:05.686838  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:05.687249  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:06.186882  649678 type.go:168] "Request Body" body=""
	I1006 14:24:06.186978  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:06.187376  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:06.686265  649678 type.go:168] "Request Body" body=""
	I1006 14:24:06.686360  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:06.686739  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:07.186388  649678 type.go:168] "Request Body" body=""
	I1006 14:24:07.186485  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:07.186890  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:07.686565  649678 type.go:168] "Request Body" body=""
	I1006 14:24:07.686740  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:07.687177  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:24:07.687301  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:24:08.186834  649678 type.go:168] "Request Body" body=""
	I1006 14:24:08.186933  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:08.187338  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:08.685923  649678 type.go:168] "Request Body" body=""
	I1006 14:24:08.686004  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:08.686400  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:09.185982  649678 type.go:168] "Request Body" body=""
	I1006 14:24:09.186075  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:09.186486  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:09.686071  649678 type.go:168] "Request Body" body=""
	I1006 14:24:09.686147  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:09.686609  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:10.186339  649678 type.go:168] "Request Body" body=""
	I1006 14:24:10.186435  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:10.186832  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:24:10.186914  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:24:10.686410  649678 type.go:168] "Request Body" body=""
	I1006 14:24:10.686491  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:10.686878  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:11.186499  649678 type.go:168] "Request Body" body=""
	I1006 14:24:11.186603  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:11.186987  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:11.686993  649678 type.go:168] "Request Body" body=""
	I1006 14:24:11.687075  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:11.687486  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:12.186044  649678 type.go:168] "Request Body" body=""
	I1006 14:24:12.186144  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:12.186531  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:12.686100  649678 type.go:168] "Request Body" body=""
	I1006 14:24:12.686192  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:12.686612  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:24:12.686688  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:24:13.186239  649678 type.go:168] "Request Body" body=""
	I1006 14:24:13.186332  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:13.186729  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:13.686339  649678 type.go:168] "Request Body" body=""
	I1006 14:24:13.686426  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:13.686827  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:14.186505  649678 type.go:168] "Request Body" body=""
	I1006 14:24:14.186600  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:14.186972  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:14.686706  649678 type.go:168] "Request Body" body=""
	I1006 14:24:14.686793  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:14.687271  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:24:14.687344  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:24:15.186857  649678 type.go:168] "Request Body" body=""
	I1006 14:24:15.186949  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:15.187318  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:15.685900  649678 type.go:168] "Request Body" body=""
	I1006 14:24:15.686005  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:15.686504  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:16.186073  649678 type.go:168] "Request Body" body=""
	I1006 14:24:16.186167  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:16.186557  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:16.686576  649678 type.go:168] "Request Body" body=""
	I1006 14:24:16.686657  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:16.687039  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:17.186833  649678 type.go:168] "Request Body" body=""
	I1006 14:24:17.186929  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:17.187333  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:24:17.187429  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:24:17.685958  649678 type.go:168] "Request Body" body=""
	I1006 14:24:17.686060  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:17.686506  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:18.186267  649678 type.go:168] "Request Body" body=""
	I1006 14:24:18.186350  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:18.186723  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:18.686325  649678 type.go:168] "Request Body" body=""
	I1006 14:24:18.686420  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:18.686789  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:19.186406  649678 type.go:168] "Request Body" body=""
	I1006 14:24:19.186488  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:19.186868  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:19.686567  649678 type.go:168] "Request Body" body=""
	I1006 14:24:19.686656  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:19.687081  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:24:19.687166  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:24:20.185993  649678 type.go:168] "Request Body" body=""
	I1006 14:24:20.186081  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:20.186515  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:20.686127  649678 type.go:168] "Request Body" body=""
	I1006 14:24:20.686261  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:20.686672  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:21.186285  649678 type.go:168] "Request Body" body=""
	I1006 14:24:21.186378  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:21.186760  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:21.686689  649678 type.go:168] "Request Body" body=""
	I1006 14:24:21.686806  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:21.687270  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:24:21.687343  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:24:22.186875  649678 type.go:168] "Request Body" body=""
	I1006 14:24:22.186957  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:22.187340  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:22.685917  649678 type.go:168] "Request Body" body=""
	I1006 14:24:22.686001  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:22.686421  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:23.186006  649678 type.go:168] "Request Body" body=""
	I1006 14:24:23.186098  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:23.186524  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:23.686088  649678 type.go:168] "Request Body" body=""
	I1006 14:24:23.686169  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:23.686561  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:24.186157  649678 type.go:168] "Request Body" body=""
	I1006 14:24:24.186277  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:24.186678  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:24:24.186752  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:24:24.686265  649678 type.go:168] "Request Body" body=""
	I1006 14:24:24.686340  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:24.686724  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:25.186308  649678 type.go:168] "Request Body" body=""
	I1006 14:24:25.186403  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:25.186836  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:25.686416  649678 type.go:168] "Request Body" body=""
	I1006 14:24:25.686502  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:25.686869  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:26.186513  649678 type.go:168] "Request Body" body=""
	I1006 14:24:26.186607  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:26.186966  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:24:26.187036  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:24:26.686743  649678 type.go:168] "Request Body" body=""
	I1006 14:24:26.686828  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:26.687232  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:27.186830  649678 type.go:168] "Request Body" body=""
	I1006 14:24:27.186956  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:27.187284  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:27.685892  649678 type.go:168] "Request Body" body=""
	I1006 14:24:27.685969  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:27.686452  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:28.185994  649678 type.go:168] "Request Body" body=""
	I1006 14:24:28.186085  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:28.186516  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:28.686092  649678 type.go:168] "Request Body" body=""
	I1006 14:24:28.686226  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:28.686600  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:24:28.686667  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:24:29.186232  649678 type.go:168] "Request Body" body=""
	I1006 14:24:29.186318  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:29.186686  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:29.686272  649678 type.go:168] "Request Body" body=""
	I1006 14:24:29.686385  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:29.686803  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:30.186682  649678 type.go:168] "Request Body" body=""
	I1006 14:24:30.186770  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:30.187128  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:30.686899  649678 type.go:168] "Request Body" body=""
	I1006 14:24:30.687000  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:30.687446  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:24:30.687521  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:24:31.186005  649678 type.go:168] "Request Body" body=""
	I1006 14:24:31.186092  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:31.186508  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:31.686473  649678 type.go:168] "Request Body" body=""
	I1006 14:24:31.686563  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:31.686985  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:32.186673  649678 type.go:168] "Request Body" body=""
	I1006 14:24:32.186756  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:32.187112  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:32.686831  649678 type.go:168] "Request Body" body=""
	I1006 14:24:32.686918  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:32.687304  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:33.185919  649678 type.go:168] "Request Body" body=""
	I1006 14:24:33.186004  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:33.186403  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:24:33.186477  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:24:33.685961  649678 type.go:168] "Request Body" body=""
	I1006 14:24:33.686072  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:33.686452  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:34.186033  649678 type.go:168] "Request Body" body=""
	I1006 14:24:34.186116  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:34.186521  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:34.686098  649678 type.go:168] "Request Body" body=""
	I1006 14:24:34.686233  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:34.686619  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:35.186193  649678 type.go:168] "Request Body" body=""
	I1006 14:24:35.186290  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:35.186663  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:24:35.186737  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:24:35.686293  649678 type.go:168] "Request Body" body=""
	I1006 14:24:35.686406  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:35.686798  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:36.186344  649678 type.go:168] "Request Body" body=""
	I1006 14:24:36.186419  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:36.186746  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:36.686564  649678 type.go:168] "Request Body" body=""
	I1006 14:24:36.686654  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:36.687044  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:37.186671  649678 type.go:168] "Request Body" body=""
	I1006 14:24:37.186749  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:37.187101  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:24:37.187190  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:24:37.686763  649678 type.go:168] "Request Body" body=""
	I1006 14:24:37.686844  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:37.687282  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:38.186015  649678 type.go:168] "Request Body" body=""
	I1006 14:24:38.186100  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:38.186512  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:38.686083  649678 type.go:168] "Request Body" body=""
	I1006 14:24:38.686160  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:38.686534  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:39.186147  649678 type.go:168] "Request Body" body=""
	I1006 14:24:39.186264  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:39.186629  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:39.686351  649678 type.go:168] "Request Body" body=""
	I1006 14:24:39.686445  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:39.686834  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:24:39.686903  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the identical GET https://192.168.49.2:8441/api/v1/nodes/functional-135520 poll repeats every ~500 ms from 14:24:40 through 14:25:41, each request receiving no response (status="" milliseconds=0); every 2-2.5 s node_ready.go:55 logs the same "dial tcp 192.168.49.2:8441: connect: connection refused" warning ...]
	I1006 14:25:41.186786  649678 type.go:168] "Request Body" body=""
	I1006 14:25:41.186883  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:41.187296  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:41.686115  649678 type.go:168] "Request Body" body=""
	I1006 14:25:41.686197  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:41.686611  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:42.186247  649678 type.go:168] "Request Body" body=""
	I1006 14:25:42.186349  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:42.186752  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:25:42.186818  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:25:42.686348  649678 type.go:168] "Request Body" body=""
	I1006 14:25:42.686429  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:42.686809  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:43.186383  649678 type.go:168] "Request Body" body=""
	I1006 14:25:43.186476  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:43.186825  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:43.686373  649678 type.go:168] "Request Body" body=""
	I1006 14:25:43.686447  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:43.686785  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:44.186380  649678 type.go:168] "Request Body" body=""
	I1006 14:25:44.186471  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:44.186817  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:25:44.186878  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:25:44.686508  649678 type.go:168] "Request Body" body=""
	I1006 14:25:44.686586  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:44.686949  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:45.186631  649678 type.go:168] "Request Body" body=""
	I1006 14:25:45.186709  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:45.187070  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:45.686683  649678 type.go:168] "Request Body" body=""
	I1006 14:25:45.686760  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:45.687117  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:46.186771  649678 type.go:168] "Request Body" body=""
	I1006 14:25:46.186856  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:46.187161  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:25:46.187239  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:25:46.685960  649678 type.go:168] "Request Body" body=""
	I1006 14:25:46.686053  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:46.686491  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:47.186117  649678 type.go:168] "Request Body" body=""
	I1006 14:25:47.186232  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:47.186563  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:47.686262  649678 type.go:168] "Request Body" body=""
	I1006 14:25:47.686353  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:47.686735  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:48.186344  649678 type.go:168] "Request Body" body=""
	I1006 14:25:48.186436  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:48.186775  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:48.686380  649678 type.go:168] "Request Body" body=""
	I1006 14:25:48.686477  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:48.686837  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:25:48.686901  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:25:49.186520  649678 type.go:168] "Request Body" body=""
	I1006 14:25:49.186599  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:49.186960  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:49.686576  649678 type.go:168] "Request Body" body=""
	I1006 14:25:49.686696  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:49.687078  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:50.186881  649678 type.go:168] "Request Body" body=""
	I1006 14:25:50.186973  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:50.187437  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:50.685990  649678 type.go:168] "Request Body" body=""
	I1006 14:25:50.686074  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:50.686473  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:51.186300  649678 type.go:168] "Request Body" body=""
	I1006 14:25:51.186379  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:51.186743  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:25:51.186811  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:25:51.686703  649678 type.go:168] "Request Body" body=""
	I1006 14:25:51.686798  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:51.687173  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:52.186898  649678 type.go:168] "Request Body" body=""
	I1006 14:25:52.186995  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:52.187412  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:52.686051  649678 type.go:168] "Request Body" body=""
	I1006 14:25:52.686131  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:52.686542  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:53.186148  649678 type.go:168] "Request Body" body=""
	I1006 14:25:53.186271  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:53.186618  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:53.686257  649678 type.go:168] "Request Body" body=""
	I1006 14:25:53.686333  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:53.686629  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:25:53.686692  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:25:54.186270  649678 type.go:168] "Request Body" body=""
	I1006 14:25:54.186349  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:54.186708  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:54.686271  649678 type.go:168] "Request Body" body=""
	I1006 14:25:54.686350  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:54.686763  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:55.186342  649678 type.go:168] "Request Body" body=""
	I1006 14:25:55.186429  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:55.186784  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:55.686364  649678 type.go:168] "Request Body" body=""
	I1006 14:25:55.686460  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:55.686892  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:25:55.686972  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:25:56.186543  649678 type.go:168] "Request Body" body=""
	I1006 14:25:56.186621  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:56.186957  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:56.686715  649678 type.go:168] "Request Body" body=""
	I1006 14:25:56.686790  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:56.687141  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:57.186851  649678 type.go:168] "Request Body" body=""
	I1006 14:25:57.186936  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:57.187306  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:57.686906  649678 type.go:168] "Request Body" body=""
	I1006 14:25:57.686983  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:57.687342  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:25:57.687412  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:25:58.185932  649678 type.go:168] "Request Body" body=""
	I1006 14:25:58.186017  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:58.186400  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:58.685929  649678 type.go:168] "Request Body" body=""
	I1006 14:25:58.686006  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:58.686337  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:59.185922  649678 type.go:168] "Request Body" body=""
	I1006 14:25:59.186001  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:59.186386  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:59.685924  649678 type.go:168] "Request Body" body=""
	I1006 14:25:59.686004  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:59.686375  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:00.186296  649678 type.go:168] "Request Body" body=""
	I1006 14:26:00.186378  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:00.186687  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:26:00.186765  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:26:00.686277  649678 type.go:168] "Request Body" body=""
	I1006 14:26:00.686360  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:00.686729  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:01.186343  649678 type.go:168] "Request Body" body=""
	I1006 14:26:01.186429  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:01.186799  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:01.686640  649678 type.go:168] "Request Body" body=""
	I1006 14:26:01.686731  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:01.687113  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:02.186812  649678 type.go:168] "Request Body" body=""
	I1006 14:26:02.186901  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:02.187298  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:26:02.187363  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:26:02.686912  649678 type.go:168] "Request Body" body=""
	I1006 14:26:02.686991  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:02.687387  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:03.186002  649678 type.go:168] "Request Body" body=""
	I1006 14:26:03.186084  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:03.186473  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:03.685977  649678 type.go:168] "Request Body" body=""
	I1006 14:26:03.686048  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:03.686381  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:04.185981  649678 type.go:168] "Request Body" body=""
	I1006 14:26:04.186057  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:04.186423  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:04.685971  649678 type.go:168] "Request Body" body=""
	I1006 14:26:04.686060  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:04.686445  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:26:04.686508  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:26:05.186070  649678 type.go:168] "Request Body" body=""
	I1006 14:26:05.186157  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:05.186570  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:05.686148  649678 type.go:168] "Request Body" body=""
	I1006 14:26:05.686264  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:05.686629  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:06.186273  649678 type.go:168] "Request Body" body=""
	I1006 14:26:06.186358  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:06.186714  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:06.686539  649678 type.go:168] "Request Body" body=""
	I1006 14:26:06.686626  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:06.686991  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:26:06.687057  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:26:07.186691  649678 type.go:168] "Request Body" body=""
	I1006 14:26:07.186766  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:07.187071  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:07.686715  649678 type.go:168] "Request Body" body=""
	I1006 14:26:07.686797  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:07.687168  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:08.186877  649678 type.go:168] "Request Body" body=""
	I1006 14:26:08.186969  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:08.187376  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:08.685874  649678 type.go:168] "Request Body" body=""
	I1006 14:26:08.685947  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:08.686343  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:09.185901  649678 type.go:168] "Request Body" body=""
	I1006 14:26:09.185986  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:09.186361  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:26:09.186422  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:26:09.685934  649678 type.go:168] "Request Body" body=""
	I1006 14:26:09.686008  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:09.686381  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:10.186337  649678 type.go:168] "Request Body" body=""
	I1006 14:26:10.186418  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:10.186799  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:10.686458  649678 type.go:168] "Request Body" body=""
	I1006 14:26:10.686543  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:10.686962  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:11.186624  649678 type.go:168] "Request Body" body=""
	I1006 14:26:11.186717  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:11.187101  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:26:11.187175  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:26:11.685850  649678 type.go:168] "Request Body" body=""
	I1006 14:26:11.685927  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:11.686323  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:12.185918  649678 type.go:168] "Request Body" body=""
	I1006 14:26:12.185998  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:12.186408  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:12.686005  649678 type.go:168] "Request Body" body=""
	I1006 14:26:12.686089  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:12.686517  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:13.186107  649678 type.go:168] "Request Body" body=""
	I1006 14:26:13.186230  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:13.186588  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:13.686197  649678 type.go:168] "Request Body" body=""
	I1006 14:26:13.686355  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:13.686711  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:26:13.686772  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:26:14.186309  649678 type.go:168] "Request Body" body=""
	I1006 14:26:14.186392  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:14.186749  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:14.686366  649678 type.go:168] "Request Body" body=""
	I1006 14:26:14.686450  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:14.686778  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:15.185991  649678 type.go:168] "Request Body" body=""
	I1006 14:26:15.186103  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:15.186529  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:15.686135  649678 type.go:168] "Request Body" body=""
	I1006 14:26:15.686243  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:15.686610  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:16.186323  649678 type.go:168] "Request Body" body=""
	I1006 14:26:16.186429  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:16.186768  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:26:16.186838  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:26:16.686609  649678 type.go:168] "Request Body" body=""
	I1006 14:26:16.686694  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:16.687041  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:17.186702  649678 type.go:168] "Request Body" body=""
	I1006 14:26:17.186792  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:17.187231  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:17.686866  649678 type.go:168] "Request Body" body=""
	I1006 14:26:17.686950  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:17.687324  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:18.185952  649678 type.go:168] "Request Body" body=""
	I1006 14:26:18.186030  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:18.186428  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:18.685978  649678 type.go:168] "Request Body" body=""
	I1006 14:26:18.686051  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:18.686440  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:26:18.686507  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:26:19.186006  649678 type.go:168] "Request Body" body=""
	I1006 14:26:19.186087  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:19.186501  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:19.686063  649678 type.go:168] "Request Body" body=""
	I1006 14:26:19.686139  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:19.686531  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:20.186356  649678 type.go:168] "Request Body" body=""
	I1006 14:26:20.186443  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:20.186802  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:20.686408  649678 type.go:168] "Request Body" body=""
	I1006 14:26:20.686495  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:20.686850  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:26:20.686922  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:26:21.186511  649678 type.go:168] "Request Body" body=""
	I1006 14:26:21.186587  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:21.186942  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:21.686813  649678 type.go:168] "Request Body" body=""
	I1006 14:26:21.686900  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:21.687313  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:22.185849  649678 type.go:168] "Request Body" body=""
	I1006 14:26:22.185931  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:22.186339  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:22.685929  649678 type.go:168] "Request Body" body=""
	I1006 14:26:22.686007  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:22.686413  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:23.186016  649678 type.go:168] "Request Body" body=""
	I1006 14:26:23.186102  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:23.186494  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:26:23.186565  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:26:23.686035  649678 type.go:168] "Request Body" body=""
	I1006 14:26:23.686107  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:23.686482  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:24.186086  649678 type.go:168] "Request Body" body=""
	I1006 14:26:24.186175  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:24.186554  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:24.686126  649678 type.go:168] "Request Body" body=""
	I1006 14:26:24.686237  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:24.686577  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:25.186280  649678 type.go:168] "Request Body" body=""
	I1006 14:26:25.186363  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:25.186729  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:26:25.186793  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:26:25.686357  649678 type.go:168] "Request Body" body=""
	I1006 14:26:25.686450  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:25.686832  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:26.186509  649678 type.go:168] "Request Body" body=""
	I1006 14:26:26.186599  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:26.186933  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:26.686731  649678 type.go:168] "Request Body" body=""
	I1006 14:26:26.686807  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:26.687178  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:27.186830  649678 type.go:168] "Request Body" body=""
	I1006 14:26:27.186916  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:27.187303  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:26:27.187367  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:26:27.685989  649678 type.go:168] "Request Body" body=""
	I1006 14:26:27.686079  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:27.686515  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:28.186104  649678 type.go:168] "Request Body" body=""
	I1006 14:26:28.186234  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:28.186665  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:28.686340  649678 type.go:168] "Request Body" body=""
	I1006 14:26:28.686435  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:28.686828  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:29.186495  649678 type.go:168] "Request Body" body=""
	I1006 14:26:29.186583  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:29.186957  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:29.686668  649678 type.go:168] "Request Body" body=""
	I1006 14:26:29.686747  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:29.687084  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:26:29.687155  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:26:30.185982  649678 type.go:168] "Request Body" body=""
	I1006 14:26:30.186084  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:30.186533  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[The GET https://192.168.49.2:8441/api/v1/nodes/functional-135520 poll above repeats at ~500 ms intervals through 14:27:31.686. Every attempt logs an empty "Response" status="" headers="" milliseconds=0 because the TCP connection to 192.168.49.2:8441 is refused, and the node_ready.go:55 "will retry" warning recurs roughly every two seconds, after every fourth or fifth failed attempt.]
	I1006 14:27:32.186675  649678 type.go:168] "Request Body" body=""
	I1006 14:27:32.186758  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:32.187124  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:32.686856  649678 type.go:168] "Request Body" body=""
	I1006 14:27:32.686942  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:32.687307  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:27:32.687374  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:27:33.185899  649678 type.go:168] "Request Body" body=""
	I1006 14:27:33.185977  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:33.186402  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:33.685994  649678 type.go:168] "Request Body" body=""
	I1006 14:27:33.686074  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:33.686482  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:34.186077  649678 type.go:168] "Request Body" body=""
	I1006 14:27:34.186156  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:34.186558  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:34.686141  649678 type.go:168] "Request Body" body=""
	I1006 14:27:34.686238  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:34.686596  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:35.186192  649678 type.go:168] "Request Body" body=""
	I1006 14:27:35.186297  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:35.186668  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:27:35.186738  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:27:35.686376  649678 type.go:168] "Request Body" body=""
	I1006 14:27:35.686471  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:35.686827  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:36.186471  649678 type.go:168] "Request Body" body=""
	I1006 14:27:36.186549  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:36.186909  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:36.686773  649678 type.go:168] "Request Body" body=""
	I1006 14:27:36.686851  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:36.687225  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:37.186866  649678 type.go:168] "Request Body" body=""
	I1006 14:27:37.186943  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:37.187324  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:27:37.187402  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:27:37.685875  649678 type.go:168] "Request Body" body=""
	I1006 14:27:37.685951  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:37.686318  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:38.185935  649678 type.go:168] "Request Body" body=""
	I1006 14:27:38.186022  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:38.186413  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:38.685990  649678 type.go:168] "Request Body" body=""
	I1006 14:27:38.686065  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:38.686446  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:39.186040  649678 type.go:168] "Request Body" body=""
	I1006 14:27:39.186119  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:39.186517  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:39.686067  649678 type.go:168] "Request Body" body=""
	I1006 14:27:39.686152  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:39.686509  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:27:39.686570  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:27:40.186335  649678 type.go:168] "Request Body" body=""
	I1006 14:27:40.186421  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:40.186798  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:40.686383  649678 type.go:168] "Request Body" body=""
	I1006 14:27:40.686477  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:40.686843  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:41.186496  649678 type.go:168] "Request Body" body=""
	I1006 14:27:41.186589  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:41.186955  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:41.686485  649678 type.go:168] "Request Body" body=""
	I1006 14:27:41.686563  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:41.686938  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:27:41.687005  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:27:42.186439  649678 type.go:168] "Request Body" body=""
	I1006 14:27:42.186523  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:42.186890  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:42.686663  649678 type.go:168] "Request Body" body=""
	I1006 14:27:42.686739  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:42.687098  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:43.186774  649678 type.go:168] "Request Body" body=""
	I1006 14:27:43.186856  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:43.187251  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:43.686855  649678 type.go:168] "Request Body" body=""
	I1006 14:27:43.686937  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:43.687333  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:27:43.687401  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:27:44.185915  649678 type.go:168] "Request Body" body=""
	I1006 14:27:44.185993  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:44.186423  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:44.685989  649678 type.go:168] "Request Body" body=""
	I1006 14:27:44.686091  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:44.686498  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:45.186085  649678 type.go:168] "Request Body" body=""
	I1006 14:27:45.186165  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:45.186565  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:45.686116  649678 type.go:168] "Request Body" body=""
	I1006 14:27:45.686239  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:45.686593  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:46.186172  649678 type.go:168] "Request Body" body=""
	I1006 14:27:46.186282  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:46.186664  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:27:46.186734  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:27:46.686523  649678 type.go:168] "Request Body" body=""
	I1006 14:27:46.686604  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:46.686968  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:47.186636  649678 type.go:168] "Request Body" body=""
	I1006 14:27:47.186712  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:47.187063  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:47.686695  649678 type.go:168] "Request Body" body=""
	I1006 14:27:47.686772  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:47.687119  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:48.186827  649678 type.go:168] "Request Body" body=""
	I1006 14:27:48.186919  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:48.187317  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:27:48.187383  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:27:48.685929  649678 type.go:168] "Request Body" body=""
	I1006 14:27:48.686009  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:48.686363  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:49.185988  649678 type.go:168] "Request Body" body=""
	I1006 14:27:49.186066  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:49.186471  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:49.686018  649678 type.go:168] "Request Body" body=""
	I1006 14:27:49.686094  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:49.686456  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:50.186006  649678 node_ready.go:38] duration metric: took 6m0.000261558s for node "functional-135520" to be "Ready" ...
	I1006 14:27:50.189087  649678 out.go:203] 
	W1006 14:27:50.190513  649678 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1006 14:27:50.190545  649678 out.go:285] * 
	W1006 14:27:50.192353  649678 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1006 14:27:50.193614  649678 out.go:203] 

** /stderr **
functional_test.go:676: failed to soft start minikube. args "out/minikube-linux-amd64 start -p functional-135520 --alsologtostderr -v=8": exit status 80
functional_test.go:678: soft start took 6m4.412643405s for "functional-135520" cluster.
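For context, the repeated round_trippers GET / "connection refused" pairs in the stderr trace above are minikube's node-Ready wait: one GET of the node object roughly every 500ms until the Ready condition turns True or the 6m deadline expires. Below is a minimal client-go sketch of that pattern; it is illustrative only (the function name is hypothetical, this is not minikube's actual node_ready.go, and a kubeconfig at $KUBECONFIG is assumed).

package main

import (
	"context"
	"fmt"
	"os"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls GET /api/v1/nodes/<name> every 500ms, tolerating
// transient errors such as "connection refused", until the Ready condition
// is True or the deadline expires -- the same shape as the loop in the log.
func waitNodeReady(cs kubernetes.Interface, name string, timeout time.Duration) error {
	ctx, cancel := context.WithTimeout(context.Background(), timeout)
	defer cancel()
	for {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			fmt.Printf("error getting node %q (will retry): %v\n", name, err)
		} else {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("node %q not Ready: %w", name, ctx.Err())
		case <-time.After(500 * time.Millisecond):
		}
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitNodeReady(cs, "functional-135520", 6*time.Minute); err != nil {
		fmt.Println(err)
	}
}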
I1006 14:27:50.690056  629719 config.go:182] Loaded profile config "functional-135520": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/serial/SoftStart]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/serial/SoftStart]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-135520
helpers_test.go:243: (dbg) docker inspect functional-135520:

-- stdout --
	[
	    {
	        "Id": "3dd9a226ea42760de316c3cd10c240a06a297d856d2072b1bb8c9de31097dd20",
	        "Created": "2025-10-06T14:13:32.283355011Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 644403,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-06T14:13:32.318096257Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/3dd9a226ea42760de316c3cd10c240a06a297d856d2072b1bb8c9de31097dd20/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3dd9a226ea42760de316c3cd10c240a06a297d856d2072b1bb8c9de31097dd20/hostname",
	        "HostsPath": "/var/lib/docker/containers/3dd9a226ea42760de316c3cd10c240a06a297d856d2072b1bb8c9de31097dd20/hosts",
	        "LogPath": "/var/lib/docker/containers/3dd9a226ea42760de316c3cd10c240a06a297d856d2072b1bb8c9de31097dd20/3dd9a226ea42760de316c3cd10c240a06a297d856d2072b1bb8c9de31097dd20-json.log",
	        "Name": "/functional-135520",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-135520:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-135520",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "3dd9a226ea42760de316c3cd10c240a06a297d856d2072b1bb8c9de31097dd20",
	                "LowerDir": "/var/lib/docker/overlay2/fc963905026931708302dacddcd89a9d41c6b02cea585cc1ff491aa62dc8d60a-init/diff:/var/lib/docker/overlay2/498c39ad2e273bbda04a4b230222b9767ea2da097b1fe98436168d26143cd080/diff",
	                "MergedDir": "/var/lib/docker/overlay2/fc963905026931708302dacddcd89a9d41c6b02cea585cc1ff491aa62dc8d60a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/fc963905026931708302dacddcd89a9d41c6b02cea585cc1ff491aa62dc8d60a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/fc963905026931708302dacddcd89a9d41c6b02cea585cc1ff491aa62dc8d60a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-135520",
	                "Source": "/var/lib/docker/volumes/functional-135520/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-135520",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-135520",
	                "name.minikube.sigs.k8s.io": "functional-135520",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6368ffca3e5840f94a34614c511d9f0a0a4ca0d05de4fe1f94c8bfdc332f1a62",
	            "SandboxKey": "/var/run/docker/netns/6368ffca3e58",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32878"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32879"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32882"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32880"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32881"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-135520": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ae:d1:94:25:38:1c",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f712be59dd18dac98bed5f234c9f77a39e85277143d6f46285adcd3b0185d552",
	                    "EndpointID": "b816964b653b1b5116e3262dfdc87af272931013ef5b9e2714c9ff7357118a6f",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-135520",
	                        "3dd9a226ea42"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
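The mapped ports in the NetworkSettings block above (for example 8441/tcp, the apiserver, bound to 127.0.0.1:32881) can be read back directly with a docker inspect format template. A small sketch of that, assuming only the docker CLI and the container name from this run:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Index the Ports map twice: once by container port, once for the first
	// binding; this mirrors the NetworkSettings.Ports structure shown above.
	tmpl := `{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "inspect", "-f", tmpl, "functional-135520").Output()
	if err != nil {
		fmt.Println("docker inspect failed:", err)
		return
	}
	fmt.Println("apiserver host port: 127.0.0.1:" + strings.TrimSpace(string(out)))
}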
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-135520 -n functional-135520
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-135520 -n functional-135520: exit status 2 (315.182ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
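The "(may be ok)" note reflects that minikube status encodes cluster state in its exit code, so the post-mortem helper records a non-zero code without failing the run. A sketch of that pattern in plain Go follows (binary path and profile name taken from the log above; this is not the helper's real code):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "status",
		"--format={{.Host}}", "-p", "functional-135520", "-n", "functional-135520")
	// Output still returns the captured stdout alongside an *exec.ExitError
	// when the command exits non-zero, so the host state is never lost.
	out, err := cmd.Output()
	code := 0
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		code = ee.ExitCode()
	} else if err != nil {
		fmt.Println("could not run minikube:", err)
		return
	}
	fmt.Printf("host state: %s (exit status %d, may be ok)\n", string(out), code)
}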
helpers_test.go:252: <<< TestFunctional/serial/SoftStart FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/serial/SoftStart]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-135520 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-135520 logs -n 25: (1.030421358s)
helpers_test.go:260: TestFunctional/serial/SoftStart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-only-040731                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-040731   │ jenkins │ v1.37.0 │ 06 Oct 25 13:56 UTC │ 06 Oct 25 13:56 UTC │
	│ start   │ --download-only -p download-docker-650660 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-650660 │ jenkins │ v1.37.0 │ 06 Oct 25 13:56 UTC │                     │
	│ delete  │ -p download-docker-650660                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-650660 │ jenkins │ v1.37.0 │ 06 Oct 25 13:56 UTC │ 06 Oct 25 13:56 UTC │
	│ start   │ --download-only -p binary-mirror-501421 --alsologtostderr --binary-mirror http://127.0.0.1:36469 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-501421   │ jenkins │ v1.37.0 │ 06 Oct 25 13:56 UTC │                     │
	│ delete  │ -p binary-mirror-501421                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-501421   │ jenkins │ v1.37.0 │ 06 Oct 25 13:56 UTC │ 06 Oct 25 13:56 UTC │
	│ addons  │ enable dashboard -p addons-834039                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-834039          │ jenkins │ v1.37.0 │ 06 Oct 25 13:56 UTC │                     │
	│ addons  │ disable dashboard -p addons-834039                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-834039          │ jenkins │ v1.37.0 │ 06 Oct 25 13:56 UTC │                     │
	│ start   │ -p addons-834039 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-834039          │ jenkins │ v1.37.0 │ 06 Oct 25 13:56 UTC │                     │
	│ delete  │ -p addons-834039                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-834039          │ jenkins │ v1.37.0 │ 06 Oct 25 14:04 UTC │ 06 Oct 25 14:04 UTC │
	│ start   │ -p nospam-500584 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-500584 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                  │ nospam-500584          │ jenkins │ v1.37.0 │ 06 Oct 25 14:04 UTC │                     │
	│ start   │ nospam-500584 --log_dir /tmp/nospam-500584 start --dry-run                                                                                                                                                                                                                                                                                                                                                                                                               │ nospam-500584          │ jenkins │ v1.37.0 │ 06 Oct 25 14:13 UTC │                     │
	│ start   │ nospam-500584 --log_dir /tmp/nospam-500584 start --dry-run                                                                                                                                                                                                                                                                                                                                                                                                               │ nospam-500584          │ jenkins │ v1.37.0 │ 06 Oct 25 14:13 UTC │                     │
	│ start   │ nospam-500584 --log_dir /tmp/nospam-500584 start --dry-run                                                                                                                                                                                                                                                                                                                                                                                                               │ nospam-500584          │ jenkins │ v1.37.0 │ 06 Oct 25 14:13 UTC │                     │
	│ pause   │ nospam-500584 --log_dir /tmp/nospam-500584 pause                                                                                                                                                                                                                                                                                                                                                                                                                         │ nospam-500584          │ jenkins │ v1.37.0 │ 06 Oct 25 14:13 UTC │ 06 Oct 25 14:13 UTC │
	│ pause   │ nospam-500584 --log_dir /tmp/nospam-500584 pause                                                                                                                                                                                                                                                                                                                                                                                                                         │ nospam-500584          │ jenkins │ v1.37.0 │ 06 Oct 25 14:13 UTC │ 06 Oct 25 14:13 UTC │
	│ pause   │ nospam-500584 --log_dir /tmp/nospam-500584 pause                                                                                                                                                                                                                                                                                                                                                                                                                         │ nospam-500584          │ jenkins │ v1.37.0 │ 06 Oct 25 14:13 UTC │ 06 Oct 25 14:13 UTC │
	│ unpause │ nospam-500584 --log_dir /tmp/nospam-500584 unpause                                                                                                                                                                                                                                                                                                                                                                                                                       │ nospam-500584          │ jenkins │ v1.37.0 │ 06 Oct 25 14:13 UTC │ 06 Oct 25 14:13 UTC │
	│ unpause │ nospam-500584 --log_dir /tmp/nospam-500584 unpause                                                                                                                                                                                                                                                                                                                                                                                                                       │ nospam-500584          │ jenkins │ v1.37.0 │ 06 Oct 25 14:13 UTC │ 06 Oct 25 14:13 UTC │
	│ unpause │ nospam-500584 --log_dir /tmp/nospam-500584 unpause                                                                                                                                                                                                                                                                                                                                                                                                                       │ nospam-500584          │ jenkins │ v1.37.0 │ 06 Oct 25 14:13 UTC │ 06 Oct 25 14:13 UTC │
	│ stop    │ nospam-500584 --log_dir /tmp/nospam-500584 stop                                                                                                                                                                                                                                                                                                                                                                                                                          │ nospam-500584          │ jenkins │ v1.37.0 │ 06 Oct 25 14:13 UTC │ 06 Oct 25 14:13 UTC │
	│ stop    │ nospam-500584 --log_dir /tmp/nospam-500584 stop                                                                                                                                                                                                                                                                                                                                                                                                                          │ nospam-500584          │ jenkins │ v1.37.0 │ 06 Oct 25 14:13 UTC │ 06 Oct 25 14:13 UTC │
	│ stop    │ nospam-500584 --log_dir /tmp/nospam-500584 stop                                                                                                                                                                                                                                                                                                                                                                                                                          │ nospam-500584          │ jenkins │ v1.37.0 │ 06 Oct 25 14:13 UTC │ 06 Oct 25 14:13 UTC │
	│ delete  │ -p nospam-500584                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ nospam-500584          │ jenkins │ v1.37.0 │ 06 Oct 25 14:13 UTC │ 06 Oct 25 14:13 UTC │
	│ start   │ -p functional-135520 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                            │ functional-135520      │ jenkins │ v1.37.0 │ 06 Oct 25 14:13 UTC │                     │
	│ start   │ -p functional-135520 --alsologtostderr -v=8                                                                                                                                                                                                                                                                                                                                                                                                                              │ functional-135520      │ jenkins │ v1.37.0 │ 06 Oct 25 14:21 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/06 14:21:46
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1006 14:21:46.323016  649678 out.go:360] Setting OutFile to fd 1 ...
	I1006 14:21:46.323271  649678 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 14:21:46.323279  649678 out.go:374] Setting ErrFile to fd 2...
	I1006 14:21:46.323283  649678 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 14:21:46.323475  649678 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-626179/.minikube/bin
	I1006 14:21:46.323908  649678 out.go:368] Setting JSON to false
	I1006 14:21:46.324826  649678 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":18242,"bootTime":1759742264,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1006 14:21:46.324926  649678 start.go:140] virtualization: kvm guest
	I1006 14:21:46.326925  649678 out.go:179] * [functional-135520] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1006 14:21:46.327942  649678 notify.go:220] Checking for updates...
	I1006 14:21:46.327965  649678 out.go:179]   - MINIKUBE_LOCATION=21701
	I1006 14:21:46.329155  649678 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1006 14:21:46.330229  649678 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21701-626179/kubeconfig
	I1006 14:21:46.331298  649678 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21701-626179/.minikube
	I1006 14:21:46.332353  649678 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1006 14:21:46.333341  649678 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1006 14:21:46.334666  649678 config.go:182] Loaded profile config "functional-135520": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 14:21:46.334805  649678 driver.go:421] Setting default libvirt URI to qemu:///system
	I1006 14:21:46.359710  649678 docker.go:123] docker version: linux-28.5.0:Docker Engine - Community
	I1006 14:21:46.359861  649678 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 14:21:46.415678  649678 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-06 14:21:46.405264016 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1006 14:21:46.415787  649678 docker.go:318] overlay module found
	I1006 14:21:46.417155  649678 out.go:179] * Using the docker driver based on existing profile
	I1006 14:21:46.418292  649678 start.go:304] selected driver: docker
	I1006 14:21:46.418308  649678 start.go:924] validating driver "docker" against &{Name:functional-135520 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-135520 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 14:21:46.418380  649678 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1006 14:21:46.418468  649678 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 14:21:46.473903  649678 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-06 14:21:46.464043789 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
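The two "docker system info --format "{{json .}}"" probes above are how the start path snapshots driver health before reusing the existing profile. The same probe can be run by hand; a minimal sketch (not minikube's own code) that narrows it to the one field the CRI-O setup below depends on:

    docker system info --format '{{json .}}' > /tmp/docker-info.json   # full snapshot, as logged above
    docker system info --format '{{.CgroupDriver}}'                    # prints "systemd" on this host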
	I1006 14:21:46.474648  649678 cni.go:84] Creating CNI manager for ""
	I1006 14:21:46.474719  649678 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1006 14:21:46.474770  649678 start.go:348] cluster config:
	{Name:functional-135520 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-135520 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 14:21:46.476311  649678 out.go:179] * Starting "functional-135520" primary control-plane node in "functional-135520" cluster
	I1006 14:21:46.477235  649678 cache.go:123] Beginning downloading kic base image for docker with crio
	I1006 14:21:46.478074  649678 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1006 14:21:46.479119  649678 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1006 14:21:46.479164  649678 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21701-626179/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1006 14:21:46.479185  649678 cache.go:58] Caching tarball of preloaded images
	I1006 14:21:46.479228  649678 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1006 14:21:46.479294  649678 preload.go:233] Found /home/jenkins/minikube-integration/21701-626179/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1006 14:21:46.479309  649678 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1006 14:21:46.479413  649678 profile.go:143] Saving config to /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/config.json ...
	I1006 14:21:46.499695  649678 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1006 14:21:46.499723  649678 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1006 14:21:46.499744  649678 cache.go:232] Successfully downloaded all kic artifacts
	I1006 14:21:46.499779  649678 start.go:360] acquireMachinesLock for functional-135520: {Name:mk634323c4619e77647ac9d9aaca492e399526ae Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1006 14:21:46.499864  649678 start.go:364] duration metric: took 47.895µs to acquireMachinesLock for "functional-135520"
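The lock parameters logged above (Delay:500ms Timeout:10m0s) describe a retry-until-timeout acquisition of a per-profile machines lock. A rough shell equivalent of the timeout behavior, against a hypothetical lock file (flock blocks on the kernel lock rather than polling at the 500ms delay):

    flock -w 600 /tmp/minikube-machines.lock -c 'echo acquired'   # give up after 10 minutes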
	I1006 14:21:46.499886  649678 start.go:96] Skipping create...Using existing machine configuration
	I1006 14:21:46.499892  649678 fix.go:54] fixHost starting: 
	I1006 14:21:46.500243  649678 cli_runner.go:164] Run: docker container inspect functional-135520 --format={{.State.Status}}
	I1006 14:21:46.517601  649678 fix.go:112] recreateIfNeeded on functional-135520: state=Running err=<nil>
	W1006 14:21:46.517640  649678 fix.go:138] unexpected machine state, will restart: <nil>
	I1006 14:21:46.519112  649678 out.go:252] * Updating the running docker "functional-135520" container ...
	I1006 14:21:46.519143  649678 machine.go:93] provisionDockerMachine start ...
	I1006 14:21:46.519223  649678 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-135520
	I1006 14:21:46.537175  649678 main.go:141] libmachine: Using SSH client type: native
	I1006 14:21:46.537424  649678 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32878 <nil> <nil>}
	I1006 14:21:46.537438  649678 main.go:141] libmachine: About to run SSH command:
	hostname
	I1006 14:21:46.682374  649678 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-135520
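Each remote step in this phase uses the same plumbing: resolve the container's published SSH port with the inspect template, then run the command over SSH as the docker user. By hand that is roughly (port, key path, and username all taken from the libmachine/sshutil lines around this step):

    PORT=$(docker container inspect -f \
      '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' functional-135520)   # 32878 here
    ssh -p "$PORT" \
      -i /home/jenkins/minikube-integration/21701-626179/.minikube/machines/functional-135520/id_rsa \
      docker@127.0.0.1 hostname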
	
	I1006 14:21:46.682420  649678 ubuntu.go:182] provisioning hostname "functional-135520"
	I1006 14:21:46.682484  649678 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-135520
	I1006 14:21:46.700103  649678 main.go:141] libmachine: Using SSH client type: native
	I1006 14:21:46.700382  649678 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32878 <nil> <nil>}
	I1006 14:21:46.700401  649678 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-135520 && echo "functional-135520" | sudo tee /etc/hostname
	I1006 14:21:46.853845  649678 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-135520
	
	I1006 14:21:46.853924  649678 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-135520
	I1006 14:21:46.872015  649678 main.go:141] libmachine: Using SSH client type: native
	I1006 14:21:46.872265  649678 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32878 <nil> <nil>}
	I1006 14:21:46.872284  649678 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-135520' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-135520/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-135520' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1006 14:21:47.017154  649678 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1006 14:21:47.017184  649678 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21701-626179/.minikube CaCertPath:/home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21701-626179/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21701-626179/.minikube}
	I1006 14:21:47.017239  649678 ubuntu.go:190] setting up certificates
	I1006 14:21:47.017253  649678 provision.go:84] configureAuth start
	I1006 14:21:47.017340  649678 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-135520
	I1006 14:21:47.035104  649678 provision.go:143] copyHostCerts
	I1006 14:21:47.035140  649678 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21701-626179/.minikube/key.pem
	I1006 14:21:47.035175  649678 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-626179/.minikube/key.pem, removing ...
	I1006 14:21:47.035198  649678 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-626179/.minikube/key.pem
	I1006 14:21:47.035336  649678 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21701-626179/.minikube/key.pem (1679 bytes)
	I1006 14:21:47.035448  649678 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21701-626179/.minikube/ca.pem
	I1006 14:21:47.035468  649678 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-626179/.minikube/ca.pem, removing ...
	I1006 14:21:47.035478  649678 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-626179/.minikube/ca.pem
	I1006 14:21:47.035513  649678 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21701-626179/.minikube/ca.pem (1082 bytes)
	I1006 14:21:47.035575  649678 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21701-626179/.minikube/cert.pem
	I1006 14:21:47.035593  649678 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-626179/.minikube/cert.pem, removing ...
	I1006 14:21:47.035599  649678 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-626179/.minikube/cert.pem
	I1006 14:21:47.035623  649678 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21701-626179/.minikube/cert.pem (1123 bytes)
	I1006 14:21:47.035688  649678 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21701-626179/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca-key.pem org=jenkins.functional-135520 san=[127.0.0.1 192.168.49.2 functional-135520 localhost minikube]
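The server certificate is regenerated against the minikube CA with the SANs listed above baked in. minikube does this in Go; an illustrative openssl equivalent of the same issuance (a sketch, not the actual implementation):

    openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem -out server.csr \
      -subj "/O=jenkins.functional-135520"
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
      -out server.pem -days 365 \
      -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.49.2,DNS:functional-135520,DNS:localhost,DNS:minikube')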
	I1006 14:21:47.332166  649678 provision.go:177] copyRemoteCerts
	I1006 14:21:47.332258  649678 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1006 14:21:47.332304  649678 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-135520
	I1006 14:21:47.351185  649678 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32878 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/functional-135520/id_rsa Username:docker}
	I1006 14:21:47.453191  649678 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1006 14:21:47.453264  649678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1006 14:21:47.470840  649678 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1006 14:21:47.470907  649678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1006 14:21:47.487466  649678 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1006 14:21:47.487518  649678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1006 14:21:47.504343  649678 provision.go:87] duration metric: took 487.07429ms to configureAuth
	I1006 14:21:47.504374  649678 ubuntu.go:206] setting minikube options for container-runtime
	I1006 14:21:47.504541  649678 config.go:182] Loaded profile config "functional-135520": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 14:21:47.504639  649678 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-135520
	I1006 14:21:47.523029  649678 main.go:141] libmachine: Using SSH client type: native
	I1006 14:21:47.523280  649678 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32878 <nil> <nil>}
	I1006 14:21:47.523307  649678 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1006 14:21:47.788227  649678 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1006 14:21:47.788259  649678 machine.go:96] duration metric: took 1.269106143s to provisionDockerMachine
	I1006 14:21:47.788275  649678 start.go:293] postStartSetup for "functional-135520" (driver="docker")
	I1006 14:21:47.788290  649678 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1006 14:21:47.788372  649678 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1006 14:21:47.788428  649678 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-135520
	I1006 14:21:47.805850  649678 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32878 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/functional-135520/id_rsa Username:docker}
	I1006 14:21:47.908894  649678 ssh_runner.go:195] Run: cat /etc/os-release
	I1006 14:21:47.912773  649678 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1006 14:21:47.912795  649678 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1006 14:21:47.912801  649678 command_runner.go:130] > VERSION_ID="12"
	I1006 14:21:47.912807  649678 command_runner.go:130] > VERSION="12 (bookworm)"
	I1006 14:21:47.912813  649678 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1006 14:21:47.912819  649678 command_runner.go:130] > ID=debian
	I1006 14:21:47.912827  649678 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1006 14:21:47.912834  649678 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1006 14:21:47.912843  649678 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1006 14:21:47.912900  649678 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1006 14:21:47.912919  649678 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1006 14:21:47.912929  649678 filesync.go:126] Scanning /home/jenkins/minikube-integration/21701-626179/.minikube/addons for local assets ...
	I1006 14:21:47.912988  649678 filesync.go:126] Scanning /home/jenkins/minikube-integration/21701-626179/.minikube/files for local assets ...
	I1006 14:21:47.913065  649678 filesync.go:149] local asset: /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem -> 6297192.pem in /etc/ssl/certs
	I1006 14:21:47.913078  649678 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem -> /etc/ssl/certs/6297192.pem
	I1006 14:21:47.913143  649678 filesync.go:149] local asset: /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/test/nested/copy/629719/hosts -> hosts in /etc/test/nested/copy/629719
	I1006 14:21:47.913151  649678 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/test/nested/copy/629719/hosts -> /etc/test/nested/copy/629719/hosts
	I1006 14:21:47.913182  649678 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/629719
	I1006 14:21:47.920839  649678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem --> /etc/ssl/certs/6297192.pem (1708 bytes)
	I1006 14:21:47.937786  649678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/test/nested/copy/629719/hosts --> /etc/test/nested/copy/629719/hosts (40 bytes)
	I1006 14:21:47.954760  649678 start.go:296] duration metric: took 166.455369ms for postStartSetup
	I1006 14:21:47.954834  649678 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1006 14:21:47.954870  649678 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-135520
	I1006 14:21:47.972368  649678 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32878 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/functional-135520/id_rsa Username:docker}
	I1006 14:21:48.072535  649678 command_runner.go:130] > 38%
	I1006 14:21:48.072624  649678 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1006 14:21:48.077267  649678 command_runner.go:130] > 182G
	I1006 14:21:48.077574  649678 fix.go:56] duration metric: took 1.577678011s for fixHost
	I1006 14:21:48.077595  649678 start.go:83] releasing machines lock for "functional-135520", held for 1.577717734s
	I1006 14:21:48.077675  649678 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-135520
	I1006 14:21:48.095670  649678 ssh_runner.go:195] Run: cat /version.json
	I1006 14:21:48.095722  649678 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-135520
	I1006 14:21:48.095754  649678 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1006 14:21:48.095827  649678 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-135520
	I1006 14:21:48.113591  649678 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32878 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/functional-135520/id_rsa Username:docker}
	I1006 14:21:48.115313  649678 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32878 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/functional-135520/id_rsa Username:docker}
	I1006 14:21:48.268773  649678 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1006 14:21:48.268839  649678 command_runner.go:130] > {"iso_version": "v1.37.0-1758198818-20370", "kicbase_version": "v0.0.48-1759382731-21643", "minikube_version": "v1.37.0", "commit": "b0c70dd4d342e6443a02916e52d246d8cdb181c4"}
	I1006 14:21:48.268953  649678 ssh_runner.go:195] Run: systemctl --version
	I1006 14:21:48.275683  649678 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1006 14:21:48.275717  649678 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1006 14:21:48.275778  649678 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1006 14:21:48.311695  649678 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1006 14:21:48.316662  649678 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1006 14:21:48.316719  649678 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1006 14:21:48.316778  649678 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1006 14:21:48.324682  649678 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1006 14:21:48.324705  649678 start.go:495] detecting cgroup driver to use...
	I1006 14:21:48.324740  649678 detect.go:190] detected "systemd" cgroup driver on host os
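detect.go settles the cgroup driver before any runtime configuration is written. One common heuristic for the same determination (not necessarily minikube's exact check) is that a unified cgroup v2 mount on a systemd host implies the "systemd" driver:

    stat -fc %T /sys/fs/cgroup   # "cgroup2fs" => unified cgroup v2 hierarchy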
	I1006 14:21:48.324780  649678 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1006 14:21:48.339343  649678 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1006 14:21:48.350971  649678 docker.go:218] disabling cri-docker service (if available) ...
	I1006 14:21:48.351020  649678 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1006 14:21:48.364377  649678 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1006 14:21:48.375810  649678 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1006 14:21:48.466998  649678 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1006 14:21:48.555437  649678 docker.go:234] disabling docker service ...
	I1006 14:21:48.555507  649678 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1006 14:21:48.569642  649678 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1006 14:21:48.581371  649678 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1006 14:21:48.660341  649678 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1006 14:21:48.745051  649678 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1006 14:21:48.757689  649678 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1006 14:21:48.770829  649678 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
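With /etc/crictl.yaml written, every later crictl call is pinned to CRI-O's socket instead of auto-probing for a runtime. The step above sets only the runtime endpoint; a fuller version of the same file, with crictl's other fields shown at their defaults (illustrative, not what this run wrote):

    runtime-endpoint: unix:///var/run/crio/crio.sock
    image-endpoint: unix:///var/run/crio/crio.sock
    timeout: 2
    debug: false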
	I1006 14:21:48.771733  649678 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1006 14:21:48.771806  649678 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:21:48.781084  649678 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1006 14:21:48.781164  649678 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:21:48.790125  649678 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:21:48.798751  649678 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:21:48.807637  649678 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1006 14:21:48.815986  649678 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:21:48.824650  649678 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:21:48.832873  649678 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
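Taken together, the sed edits above pin the pause image, switch the cgroup manager to systemd, scope conmon to the pod cgroup, and open unprivileged ports to containers. Reconstructed from those commands (not copied from the host; the table headers follow stock CRI-O config layout), the resulting drop-in amounts to:

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10.1"

    [crio.runtime]
    cgroup_manager = "systemd"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]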
	I1006 14:21:48.841368  649678 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1006 14:21:48.847999  649678 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1006 14:21:48.848646  649678 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1006 14:21:48.855735  649678 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 14:21:48.941247  649678 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1006 14:21:49.054732  649678 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1006 14:21:49.054813  649678 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1006 14:21:49.059042  649678 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1006 14:21:49.059070  649678 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1006 14:21:49.059079  649678 command_runner.go:130] > Device: 0,59	Inode: 3845        Links: 1
	I1006 14:21:49.059086  649678 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1006 14:21:49.059091  649678 command_runner.go:130] > Access: 2025-10-06 14:21:49.037104102 +0000
	I1006 14:21:49.059104  649678 command_runner.go:130] > Modify: 2025-10-06 14:21:49.037104102 +0000
	I1006 14:21:49.059109  649678 command_runner.go:130] > Change: 2025-10-06 14:21:49.037104102 +0000
	I1006 14:21:49.059113  649678 command_runner.go:130] >  Birth: 2025-10-06 14:21:49.037104102 +0000
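The two 60-second waits poll the restarted daemon: first for /var/run/crio/crio.sock to exist (the stat output above confirms a freshly created socket), then, announced just below, for crictl to answer on it. The socket half collapses to one line:

    timeout 60 bash -c 'until [ -S /var/run/crio/crio.sock ]; do sleep 1; done'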
	I1006 14:21:49.059133  649678 start.go:563] Will wait 60s for crictl version
	I1006 14:21:49.059181  649678 ssh_runner.go:195] Run: which crictl
	I1006 14:21:49.062689  649678 command_runner.go:130] > /usr/local/bin/crictl
	I1006 14:21:49.062764  649678 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1006 14:21:49.086605  649678 command_runner.go:130] > Version:  0.1.0
	I1006 14:21:49.086623  649678 command_runner.go:130] > RuntimeName:  cri-o
	I1006 14:21:49.086627  649678 command_runner.go:130] > RuntimeVersion:  1.34.1
	I1006 14:21:49.086632  649678 command_runner.go:130] > RuntimeApiVersion:  v1
	I1006 14:21:49.088423  649678 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1006 14:21:49.088499  649678 ssh_runner.go:195] Run: crio --version
	I1006 14:21:49.118625  649678 command_runner.go:130] > crio version 1.34.1
	I1006 14:21:49.118652  649678 command_runner.go:130] >    GitCommit:      8e14bff4153ba033f12ed3ffa3cadaca5425b313
	I1006 14:21:49.118659  649678 command_runner.go:130] >    GitCommitDate:  2025-10-01T13:04:13Z
	I1006 14:21:49.118666  649678 command_runner.go:130] >    GitTreeState:   dirty
	I1006 14:21:49.118672  649678 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1006 14:21:49.118678  649678 command_runner.go:130] >    GoVersion:      go1.24.6
	I1006 14:21:49.118683  649678 command_runner.go:130] >    Compiler:       gc
	I1006 14:21:49.118692  649678 command_runner.go:130] >    Platform:       linux/amd64
	I1006 14:21:49.118700  649678 command_runner.go:130] >    Linkmode:       static
	I1006 14:21:49.118708  649678 command_runner.go:130] >    BuildTags:
	I1006 14:21:49.118718  649678 command_runner.go:130] >      static
	I1006 14:21:49.118724  649678 command_runner.go:130] >      netgo
	I1006 14:21:49.118729  649678 command_runner.go:130] >      osusergo
	I1006 14:21:49.118739  649678 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1006 14:21:49.118745  649678 command_runner.go:130] >      seccomp
	I1006 14:21:49.118749  649678 command_runner.go:130] >      apparmor
	I1006 14:21:49.118753  649678 command_runner.go:130] >      selinux
	I1006 14:21:49.118757  649678 command_runner.go:130] >    LDFlags:          unknown
	I1006 14:21:49.118781  649678 command_runner.go:130] >    SeccompEnabled:   true
	I1006 14:21:49.118789  649678 command_runner.go:130] >    AppArmorEnabled:  false
	I1006 14:21:49.118869  649678 ssh_runner.go:195] Run: crio --version
	I1006 14:21:49.147173  649678 command_runner.go:130] > crio version 1.34.1
	I1006 14:21:49.147230  649678 command_runner.go:130] >    GitCommit:      8e14bff4153ba033f12ed3ffa3cadaca5425b313
	I1006 14:21:49.147241  649678 command_runner.go:130] >    GitCommitDate:  2025-10-01T13:04:13Z
	I1006 14:21:49.147249  649678 command_runner.go:130] >    GitTreeState:   dirty
	I1006 14:21:49.147257  649678 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1006 14:21:49.147263  649678 command_runner.go:130] >    GoVersion:      go1.24.6
	I1006 14:21:49.147267  649678 command_runner.go:130] >    Compiler:       gc
	I1006 14:21:49.147283  649678 command_runner.go:130] >    Platform:       linux/amd64
	I1006 14:21:49.147292  649678 command_runner.go:130] >    Linkmode:       static
	I1006 14:21:49.147296  649678 command_runner.go:130] >    BuildTags:
	I1006 14:21:49.147299  649678 command_runner.go:130] >      static
	I1006 14:21:49.147303  649678 command_runner.go:130] >      netgo
	I1006 14:21:49.147309  649678 command_runner.go:130] >      osusergo
	I1006 14:21:49.147313  649678 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1006 14:21:49.147320  649678 command_runner.go:130] >      seccomp
	I1006 14:21:49.147324  649678 command_runner.go:130] >      apparmor
	I1006 14:21:49.147330  649678 command_runner.go:130] >      selinux
	I1006 14:21:49.147334  649678 command_runner.go:130] >    LDFlags:          unknown
	I1006 14:21:49.147340  649678 command_runner.go:130] >    SeccompEnabled:   true
	I1006 14:21:49.147443  649678 command_runner.go:130] >    AppArmorEnabled:  false
	I1006 14:21:49.149760  649678 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1006 14:21:49.150923  649678 cli_runner.go:164] Run: docker network inspect functional-135520 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1006 14:21:49.168305  649678 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1006 14:21:49.172524  649678 command_runner.go:130] > 192.168.49.1	host.minikube.internal
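The long network-inspect template above flattens the cluster network's name, driver, subnet, gateway, MTU, and container IPs into one JSON blob, after which the grep confirms host.minikube.internal already maps to the gateway. The subnet/gateway portion alone reduces to:

    docker network inspect functional-135520 \
      -f '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'   # gateway 192.168.49.1 per the grep above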
	I1006 14:21:49.172624  649678 kubeadm.go:883] updating cluster {Name:functional-135520 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-135520 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1006 14:21:49.172735  649678 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1006 14:21:49.172777  649678 ssh_runner.go:195] Run: sudo crictl images --output json
	I1006 14:21:49.203555  649678 command_runner.go:130] > {
	I1006 14:21:49.203573  649678 command_runner.go:130] >   "images":  [
	I1006 14:21:49.203577  649678 command_runner.go:130] >     {
	I1006 14:21:49.203585  649678 command_runner.go:130] >       "id":  "409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c",
	I1006 14:21:49.203589  649678 command_runner.go:130] >       "repoTags":  [
	I1006 14:21:49.203596  649678 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1006 14:21:49.203599  649678 command_runner.go:130] >       ],
	I1006 14:21:49.203603  649678 command_runner.go:130] >       "repoDigests":  [
	I1006 14:21:49.203613  649678 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1006 14:21:49.203619  649678 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"
	I1006 14:21:49.203623  649678 command_runner.go:130] >       ],
	I1006 14:21:49.203628  649678 command_runner.go:130] >       "size":  "109379124",
	I1006 14:21:49.203634  649678 command_runner.go:130] >       "username":  "",
	I1006 14:21:49.203641  649678 command_runner.go:130] >       "pinned":  false
	I1006 14:21:49.203647  649678 command_runner.go:130] >     },
	I1006 14:21:49.203650  649678 command_runner.go:130] >     {
	I1006 14:21:49.203656  649678 command_runner.go:130] >       "id":  "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1006 14:21:49.203660  649678 command_runner.go:130] >       "repoTags":  [
	I1006 14:21:49.203665  649678 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1006 14:21:49.203671  649678 command_runner.go:130] >       ],
	I1006 14:21:49.203676  649678 command_runner.go:130] >       "repoDigests":  [
	I1006 14:21:49.203684  649678 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1006 14:21:49.203694  649678 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1006 14:21:49.203697  649678 command_runner.go:130] >       ],
	I1006 14:21:49.203701  649678 command_runner.go:130] >       "size":  "31470524",
	I1006 14:21:49.203705  649678 command_runner.go:130] >       "username":  "",
	I1006 14:21:49.203716  649678 command_runner.go:130] >       "pinned":  false
	I1006 14:21:49.203722  649678 command_runner.go:130] >     },
	I1006 14:21:49.203725  649678 command_runner.go:130] >     {
	I1006 14:21:49.203731  649678 command_runner.go:130] >       "id":  "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969",
	I1006 14:21:49.203737  649678 command_runner.go:130] >       "repoTags":  [
	I1006 14:21:49.203742  649678 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.12.1"
	I1006 14:21:49.203748  649678 command_runner.go:130] >       ],
	I1006 14:21:49.203752  649678 command_runner.go:130] >       "repoDigests":  [
	I1006 14:21:49.203759  649678 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998",
	I1006 14:21:49.203768  649678 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"
	I1006 14:21:49.203771  649678 command_runner.go:130] >       ],
	I1006 14:21:49.203775  649678 command_runner.go:130] >       "size":  "76103547",
	I1006 14:21:49.203779  649678 command_runner.go:130] >       "username":  "nonroot",
	I1006 14:21:49.203783  649678 command_runner.go:130] >       "pinned":  false
	I1006 14:21:49.203785  649678 command_runner.go:130] >     },
	I1006 14:21:49.203789  649678 command_runner.go:130] >     {
	I1006 14:21:49.203794  649678 command_runner.go:130] >       "id":  "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115",
	I1006 14:21:49.203799  649678 command_runner.go:130] >       "repoTags":  [
	I1006 14:21:49.203804  649678 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.4-0"
	I1006 14:21:49.203807  649678 command_runner.go:130] >       ],
	I1006 14:21:49.203811  649678 command_runner.go:130] >       "repoDigests":  [
	I1006 14:21:49.203817  649678 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f",
	I1006 14:21:49.203826  649678 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"
	I1006 14:21:49.203829  649678 command_runner.go:130] >       ],
	I1006 14:21:49.203836  649678 command_runner.go:130] >       "size":  "195976448",
	I1006 14:21:49.203840  649678 command_runner.go:130] >       "uid":  {
	I1006 14:21:49.203844  649678 command_runner.go:130] >         "value":  "0"
	I1006 14:21:49.203847  649678 command_runner.go:130] >       },
	I1006 14:21:49.203855  649678 command_runner.go:130] >       "username":  "",
	I1006 14:21:49.203861  649678 command_runner.go:130] >       "pinned":  false
	I1006 14:21:49.203864  649678 command_runner.go:130] >     },
	I1006 14:21:49.203867  649678 command_runner.go:130] >     {
	I1006 14:21:49.203873  649678 command_runner.go:130] >       "id":  "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97",
	I1006 14:21:49.203879  649678 command_runner.go:130] >       "repoTags":  [
	I1006 14:21:49.203884  649678 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.34.1"
	I1006 14:21:49.203887  649678 command_runner.go:130] >       ],
	I1006 14:21:49.203891  649678 command_runner.go:130] >       "repoDigests":  [
	I1006 14:21:49.203901  649678 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964",
	I1006 14:21:49.203907  649678 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"
	I1006 14:21:49.203913  649678 command_runner.go:130] >       ],
	I1006 14:21:49.203916  649678 command_runner.go:130] >       "size":  "89046001",
	I1006 14:21:49.203920  649678 command_runner.go:130] >       "uid":  {
	I1006 14:21:49.203925  649678 command_runner.go:130] >         "value":  "0"
	I1006 14:21:49.203928  649678 command_runner.go:130] >       },
	I1006 14:21:49.203931  649678 command_runner.go:130] >       "username":  "",
	I1006 14:21:49.203935  649678 command_runner.go:130] >       "pinned":  false
	I1006 14:21:49.203938  649678 command_runner.go:130] >     },
	I1006 14:21:49.203941  649678 command_runner.go:130] >     {
	I1006 14:21:49.203947  649678 command_runner.go:130] >       "id":  "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f",
	I1006 14:21:49.203953  649678 command_runner.go:130] >       "repoTags":  [
	I1006 14:21:49.203958  649678 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.34.1"
	I1006 14:21:49.203961  649678 command_runner.go:130] >       ],
	I1006 14:21:49.203965  649678 command_runner.go:130] >       "repoDigests":  [
	I1006 14:21:49.203972  649678 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89",
	I1006 14:21:49.203981  649678 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"
	I1006 14:21:49.203984  649678 command_runner.go:130] >       ],
	I1006 14:21:49.203988  649678 command_runner.go:130] >       "size":  "76004181",
	I1006 14:21:49.203992  649678 command_runner.go:130] >       "uid":  {
	I1006 14:21:49.203998  649678 command_runner.go:130] >         "value":  "0"
	I1006 14:21:49.204001  649678 command_runner.go:130] >       },
	I1006 14:21:49.204005  649678 command_runner.go:130] >       "username":  "",
	I1006 14:21:49.204011  649678 command_runner.go:130] >       "pinned":  false
	I1006 14:21:49.204014  649678 command_runner.go:130] >     },
	I1006 14:21:49.204019  649678 command_runner.go:130] >     {
	I1006 14:21:49.204024  649678 command_runner.go:130] >       "id":  "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7",
	I1006 14:21:49.204028  649678 command_runner.go:130] >       "repoTags":  [
	I1006 14:21:49.204033  649678 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.34.1"
	I1006 14:21:49.204036  649678 command_runner.go:130] >       ],
	I1006 14:21:49.204042  649678 command_runner.go:130] >       "repoDigests":  [
	I1006 14:21:49.204055  649678 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a",
	I1006 14:21:49.204067  649678 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"
	I1006 14:21:49.204073  649678 command_runner.go:130] >       ],
	I1006 14:21:49.204078  649678 command_runner.go:130] >       "size":  "73138073",
	I1006 14:21:49.204081  649678 command_runner.go:130] >       "username":  "",
	I1006 14:21:49.204085  649678 command_runner.go:130] >       "pinned":  false
	I1006 14:21:49.204089  649678 command_runner.go:130] >     },
	I1006 14:21:49.204092  649678 command_runner.go:130] >     {
	I1006 14:21:49.204097  649678 command_runner.go:130] >       "id":  "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813",
	I1006 14:21:49.204104  649678 command_runner.go:130] >       "repoTags":  [
	I1006 14:21:49.204108  649678 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.34.1"
	I1006 14:21:49.204112  649678 command_runner.go:130] >       ],
	I1006 14:21:49.204116  649678 command_runner.go:130] >       "repoDigests":  [
	I1006 14:21:49.204123  649678 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31",
	I1006 14:21:49.204153  649678 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"
	I1006 14:21:49.204160  649678 command_runner.go:130] >       ],
	I1006 14:21:49.204164  649678 command_runner.go:130] >       "size":  "53844823",
	I1006 14:21:49.204167  649678 command_runner.go:130] >       "uid":  {
	I1006 14:21:49.204170  649678 command_runner.go:130] >         "value":  "0"
	I1006 14:21:49.204174  649678 command_runner.go:130] >       },
	I1006 14:21:49.204178  649678 command_runner.go:130] >       "username":  "",
	I1006 14:21:49.204183  649678 command_runner.go:130] >       "pinned":  false
	I1006 14:21:49.204188  649678 command_runner.go:130] >     },
	I1006 14:21:49.204191  649678 command_runner.go:130] >     {
	I1006 14:21:49.204197  649678 command_runner.go:130] >       "id":  "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f",
	I1006 14:21:49.204222  649678 command_runner.go:130] >       "repoTags":  [
	I1006 14:21:49.204230  649678 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1006 14:21:49.204237  649678 command_runner.go:130] >       ],
	I1006 14:21:49.204243  649678 command_runner.go:130] >       "repoDigests":  [
	I1006 14:21:49.204253  649678 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1006 14:21:49.204260  649678 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"
	I1006 14:21:49.204266  649678 command_runner.go:130] >       ],
	I1006 14:21:49.204269  649678 command_runner.go:130] >       "size":  "742092",
	I1006 14:21:49.204273  649678 command_runner.go:130] >       "uid":  {
	I1006 14:21:49.204277  649678 command_runner.go:130] >         "value":  "65535"
	I1006 14:21:49.204280  649678 command_runner.go:130] >       },
	I1006 14:21:49.204284  649678 command_runner.go:130] >       "username":  "",
	I1006 14:21:49.204288  649678 command_runner.go:130] >       "pinned":  true
	I1006 14:21:49.204291  649678 command_runner.go:130] >     }
	I1006 14:21:49.204294  649678 command_runner.go:130] >   ]
	I1006 14:21:49.204299  649678 command_runner.go:130] > }
	I1006 14:21:49.205550  649678 crio.go:514] all images are preloaded for cri-o runtime.
	I1006 14:21:49.205570  649678 crio.go:433] Images already preloaded, skipping extraction
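"Images already preloaded" means every image the v1.34.1 control plane needs was found in CRI-O's store (the JSON above), so the cached preload tarball is not re-extracted. A by-hand spot check of the core-image part of that comparison (assuming kubeadm is on the PATH):

    kubeadm config images list --kubernetes-version v1.34.1 | while read -r img; do
      sudo crictl inspecti "$img" >/dev/null 2>&1 || echo "missing: $img"
    done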
	I1006 14:21:49.205618  649678 ssh_runner.go:195] Run: sudo crictl images --output json
	I1006 14:21:49.229611  649678 command_runner.go:130] > {
	I1006 14:21:49.229630  649678 command_runner.go:130] >   "images":  [
	I1006 14:21:49.229637  649678 command_runner.go:130] >     {
	I1006 14:21:49.229647  649678 command_runner.go:130] >       "id":  "409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c",
	I1006 14:21:49.229656  649678 command_runner.go:130] >       "repoTags":  [
	I1006 14:21:49.229664  649678 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1006 14:21:49.229669  649678 command_runner.go:130] >       ],
	I1006 14:21:49.229675  649678 command_runner.go:130] >       "repoDigests":  [
	I1006 14:21:49.229690  649678 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1006 14:21:49.229706  649678 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"
	I1006 14:21:49.229712  649678 command_runner.go:130] >       ],
	I1006 14:21:49.229738  649678 command_runner.go:130] >       "size":  "109379124",
	I1006 14:21:49.229748  649678 command_runner.go:130] >       "username":  "",
	I1006 14:21:49.229755  649678 command_runner.go:130] >       "pinned":  false
	I1006 14:21:49.229761  649678 command_runner.go:130] >     },
	I1006 14:21:49.229770  649678 command_runner.go:130] >     {
	I1006 14:21:49.229780  649678 command_runner.go:130] >       "id":  "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1006 14:21:49.229789  649678 command_runner.go:130] >       "repoTags":  [
	I1006 14:21:49.229799  649678 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1006 14:21:49.229807  649678 command_runner.go:130] >       ],
	I1006 14:21:49.229814  649678 command_runner.go:130] >       "repoDigests":  [
	I1006 14:21:49.229830  649678 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1006 14:21:49.229846  649678 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1006 14:21:49.229854  649678 command_runner.go:130] >       ],
	I1006 14:21:49.229863  649678 command_runner.go:130] >       "size":  "31470524",
	I1006 14:21:49.229872  649678 command_runner.go:130] >       "username":  "",
	I1006 14:21:49.229894  649678 command_runner.go:130] >       "pinned":  false
	I1006 14:21:49.229902  649678 command_runner.go:130] >     },
	I1006 14:21:49.229907  649678 command_runner.go:130] >     {
	I1006 14:21:49.229918  649678 command_runner.go:130] >       "id":  "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969",
	I1006 14:21:49.229927  649678 command_runner.go:130] >       "repoTags":  [
	I1006 14:21:49.229936  649678 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.12.1"
	I1006 14:21:49.229943  649678 command_runner.go:130] >       ],
	I1006 14:21:49.229951  649678 command_runner.go:130] >       "repoDigests":  [
	I1006 14:21:49.229965  649678 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998",
	I1006 14:21:49.229980  649678 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"
	I1006 14:21:49.229999  649678 command_runner.go:130] >       ],
	I1006 14:21:49.230007  649678 command_runner.go:130] >       "size":  "76103547",
	I1006 14:21:49.230016  649678 command_runner.go:130] >       "username":  "nonroot",
	I1006 14:21:49.230023  649678 command_runner.go:130] >       "pinned":  false
	I1006 14:21:49.230031  649678 command_runner.go:130] >     },
	I1006 14:21:49.230036  649678 command_runner.go:130] >     {
	I1006 14:21:49.230050  649678 command_runner.go:130] >       "id":  "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115",
	I1006 14:21:49.230059  649678 command_runner.go:130] >       "repoTags":  [
	I1006 14:21:49.230068  649678 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.4-0"
	I1006 14:21:49.230076  649678 command_runner.go:130] >       ],
	I1006 14:21:49.230083  649678 command_runner.go:130] >       "repoDigests":  [
	I1006 14:21:49.230097  649678 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f",
	I1006 14:21:49.230112  649678 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"
	I1006 14:21:49.230119  649678 command_runner.go:130] >       ],
	I1006 14:21:49.230127  649678 command_runner.go:130] >       "size":  "195976448",
	I1006 14:21:49.230135  649678 command_runner.go:130] >       "uid":  {
	I1006 14:21:49.230143  649678 command_runner.go:130] >         "value":  "0"
	I1006 14:21:49.230152  649678 command_runner.go:130] >       },
	I1006 14:21:49.230165  649678 command_runner.go:130] >       "username":  "",
	I1006 14:21:49.230175  649678 command_runner.go:130] >       "pinned":  false
	I1006 14:21:49.230181  649678 command_runner.go:130] >     },
	I1006 14:21:49.230189  649678 command_runner.go:130] >     {
	I1006 14:21:49.230220  649678 command_runner.go:130] >       "id":  "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97",
	I1006 14:21:49.230239  649678 command_runner.go:130] >       "repoTags":  [
	I1006 14:21:49.230249  649678 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.34.1"
	I1006 14:21:49.230257  649678 command_runner.go:130] >       ],
	I1006 14:21:49.230264  649678 command_runner.go:130] >       "repoDigests":  [
	I1006 14:21:49.230279  649678 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964",
	I1006 14:21:49.230306  649678 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"
	I1006 14:21:49.230314  649678 command_runner.go:130] >       ],
	I1006 14:21:49.230321  649678 command_runner.go:130] >       "size":  "89046001",
	I1006 14:21:49.230329  649678 command_runner.go:130] >       "uid":  {
	I1006 14:21:49.230336  649678 command_runner.go:130] >         "value":  "0"
	I1006 14:21:49.230345  649678 command_runner.go:130] >       },
	I1006 14:21:49.230352  649678 command_runner.go:130] >       "username":  "",
	I1006 14:21:49.230361  649678 command_runner.go:130] >       "pinned":  false
	I1006 14:21:49.230367  649678 command_runner.go:130] >     },
	I1006 14:21:49.230375  649678 command_runner.go:130] >     {
	I1006 14:21:49.230386  649678 command_runner.go:130] >       "id":  "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f",
	I1006 14:21:49.230395  649678 command_runner.go:130] >       "repoTags":  [
	I1006 14:21:49.230406  649678 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.34.1"
	I1006 14:21:49.230414  649678 command_runner.go:130] >       ],
	I1006 14:21:49.230421  649678 command_runner.go:130] >       "repoDigests":  [
	I1006 14:21:49.230436  649678 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89",
	I1006 14:21:49.230451  649678 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"
	I1006 14:21:49.230460  649678 command_runner.go:130] >       ],
	I1006 14:21:49.230467  649678 command_runner.go:130] >       "size":  "76004181",
	I1006 14:21:49.230484  649678 command_runner.go:130] >       "uid":  {
	I1006 14:21:49.230493  649678 command_runner.go:130] >         "value":  "0"
	I1006 14:21:49.230500  649678 command_runner.go:130] >       },
	I1006 14:21:49.230507  649678 command_runner.go:130] >       "username":  "",
	I1006 14:21:49.230516  649678 command_runner.go:130] >       "pinned":  false
	I1006 14:21:49.230523  649678 command_runner.go:130] >     },
	I1006 14:21:49.230529  649678 command_runner.go:130] >     {
	I1006 14:21:49.230542  649678 command_runner.go:130] >       "id":  "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7",
	I1006 14:21:49.230549  649678 command_runner.go:130] >       "repoTags":  [
	I1006 14:21:49.230568  649678 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.34.1"
	I1006 14:21:49.230576  649678 command_runner.go:130] >       ],
	I1006 14:21:49.230583  649678 command_runner.go:130] >       "repoDigests":  [
	I1006 14:21:49.230599  649678 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a",
	I1006 14:21:49.230614  649678 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"
	I1006 14:21:49.230621  649678 command_runner.go:130] >       ],
	I1006 14:21:49.230628  649678 command_runner.go:130] >       "size":  "73138073",
	I1006 14:21:49.230637  649678 command_runner.go:130] >       "username":  "",
	I1006 14:21:49.230645  649678 command_runner.go:130] >       "pinned":  false
	I1006 14:21:49.230653  649678 command_runner.go:130] >     },
	I1006 14:21:49.230658  649678 command_runner.go:130] >     {
	I1006 14:21:49.230665  649678 command_runner.go:130] >       "id":  "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813",
	I1006 14:21:49.230670  649678 command_runner.go:130] >       "repoTags":  [
	I1006 14:21:49.230679  649678 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.34.1"
	I1006 14:21:49.230687  649678 command_runner.go:130] >       ],
	I1006 14:21:49.230693  649678 command_runner.go:130] >       "repoDigests":  [
	I1006 14:21:49.230706  649678 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31",
	I1006 14:21:49.230734  649678 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"
	I1006 14:21:49.230745  649678 command_runner.go:130] >       ],
	I1006 14:21:49.230751  649678 command_runner.go:130] >       "size":  "53844823",
	I1006 14:21:49.230758  649678 command_runner.go:130] >       "uid":  {
	I1006 14:21:49.230767  649678 command_runner.go:130] >         "value":  "0"
	I1006 14:21:49.230773  649678 command_runner.go:130] >       },
	I1006 14:21:49.230783  649678 command_runner.go:130] >       "username":  "",
	I1006 14:21:49.230791  649678 command_runner.go:130] >       "pinned":  false
	I1006 14:21:49.230799  649678 command_runner.go:130] >     },
	I1006 14:21:49.230805  649678 command_runner.go:130] >     {
	I1006 14:21:49.230819  649678 command_runner.go:130] >       "id":  "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f",
	I1006 14:21:49.230828  649678 command_runner.go:130] >       "repoTags":  [
	I1006 14:21:49.230837  649678 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1006 14:21:49.230845  649678 command_runner.go:130] >       ],
	I1006 14:21:49.230852  649678 command_runner.go:130] >       "repoDigests":  [
	I1006 14:21:49.230865  649678 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1006 14:21:49.230878  649678 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"
	I1006 14:21:49.230887  649678 command_runner.go:130] >       ],
	I1006 14:21:49.230894  649678 command_runner.go:130] >       "size":  "742092",
	I1006 14:21:49.230902  649678 command_runner.go:130] >       "uid":  {
	I1006 14:21:49.230909  649678 command_runner.go:130] >         "value":  "65535"
	I1006 14:21:49.230918  649678 command_runner.go:130] >       },
	I1006 14:21:49.230924  649678 command_runner.go:130] >       "username":  "",
	I1006 14:21:49.230934  649678 command_runner.go:130] >       "pinned":  true
	I1006 14:21:49.230940  649678 command_runner.go:130] >     }
	I1006 14:21:49.230948  649678 command_runner.go:130] >   ]
	I1006 14:21:49.230953  649678 command_runner.go:130] > }
	I1006 14:21:49.231845  649678 crio.go:514] all images are preloaded for cri-o runtime.
	I1006 14:21:49.231866  649678 cache_images.go:85] Images are preloaded, skipping loading
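
The image inventory above is what minikube parses before concluding that image loading can be skipped. A comparable listing can be reproduced by hand; a minimal sketch, not part of this run, assuming the profile name functional-135520 seen in the kubelet flags below and jq installed on the host:

	minikube -p functional-135520 ssh -- sudo crictl images --output json \
	  | jq -r '.images[].repoTags[]'
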
	I1006 14:21:49.231873  649678 kubeadm.go:934] updating node { 192.168.49.2 8441 v1.34.1 crio true true} ...
	I1006 14:21:49.232021  649678 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-135520 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:functional-135520 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
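
The ExecStart line above is rendered from the cluster config dumped alongside it; the empty ExtraOptions list is why no additional component flags appear. As a hedged sketch (not a command from this run), an extra kubelet flag would be requested at start time through minikube's --extra-config mechanism and would then show up on the rendered ExecStart line; kubelet.housekeeping-interval is only an illustrative choice of option:

	out/minikube-linux-amd64 start -p functional-135520 --driver=docker \
	  --container-runtime=crio --extra-config=kubelet.housekeeping-interval=5m
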
	I1006 14:21:49.232106  649678 ssh_runner.go:195] Run: crio config
	I1006 14:21:49.273258  649678 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1006 14:21:49.273298  649678 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1006 14:21:49.273306  649678 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1006 14:21:49.273309  649678 command_runner.go:130] > #
	I1006 14:21:49.273321  649678 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1006 14:21:49.273332  649678 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1006 14:21:49.273343  649678 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1006 14:21:49.273357  649678 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1006 14:21:49.273367  649678 command_runner.go:130] > # reload'.
	I1006 14:21:49.273377  649678 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1006 14:21:49.273389  649678 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1006 14:21:49.273403  649678 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1006 14:21:49.273413  649678 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1006 14:21:49.273423  649678 command_runner.go:130] > [crio]
	I1006 14:21:49.273433  649678 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1006 14:21:49.273446  649678 command_runner.go:130] > # container images, in this directory.
	I1006 14:21:49.273471  649678 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1006 14:21:49.273486  649678 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1006 14:21:49.273494  649678 command_runner.go:130] > # runroot = "/tmp/storage-run-1000/containers"
	I1006 14:21:49.273512  649678 command_runner.go:130] > # Path to the "imagestore". If set, CRI-O stores its images in this directory, separately from the root directory.
	I1006 14:21:49.273525  649678 command_runner.go:130] > # imagestore = ""
	I1006 14:21:49.273535  649678 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1006 14:21:49.273548  649678 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1006 14:21:49.273561  649678 command_runner.go:130] > # storage_driver = "overlay"
	I1006 14:21:49.273574  649678 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1006 14:21:49.273591  649678 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1006 14:21:49.273599  649678 command_runner.go:130] > # storage_option = [
	I1006 14:21:49.273613  649678 command_runner.go:130] > # ]
	I1006 14:21:49.273623  649678 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1006 14:21:49.273635  649678 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1006 14:21:49.273642  649678 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1006 14:21:49.273652  649678 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1006 14:21:49.273664  649678 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1006 14:21:49.273678  649678 command_runner.go:130] > # always happen on a node reboot
	I1006 14:21:49.273690  649678 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1006 14:21:49.273712  649678 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1006 14:21:49.273725  649678 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1006 14:21:49.273743  649678 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1006 14:21:49.273751  649678 command_runner.go:130] > # version_file_persist = ""
	I1006 14:21:49.273764  649678 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1006 14:21:49.273781  649678 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1006 14:21:49.273792  649678 command_runner.go:130] > # internal_wipe = true
	I1006 14:21:49.273806  649678 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1006 14:21:49.273819  649678 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1006 14:21:49.273829  649678 command_runner.go:130] > # internal_repair = true
	I1006 14:21:49.273842  649678 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1006 14:21:49.273856  649678 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1006 14:21:49.273870  649678 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1006 14:21:49.273880  649678 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1006 14:21:49.273894  649678 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1006 14:21:49.273901  649678 command_runner.go:130] > [crio.api]
	I1006 14:21:49.273915  649678 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1006 14:21:49.273926  649678 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1006 14:21:49.273935  649678 command_runner.go:130] > # IP address on which the stream server will listen.
	I1006 14:21:49.273947  649678 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1006 14:21:49.273963  649678 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1006 14:21:49.273975  649678 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1006 14:21:49.273987  649678 command_runner.go:130] > # stream_port = "0"
	I1006 14:21:49.274002  649678 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1006 14:21:49.274013  649678 command_runner.go:130] > # stream_enable_tls = false
	I1006 14:21:49.274023  649678 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1006 14:21:49.274035  649678 command_runner.go:130] > # stream_idle_timeout = ""
	I1006 14:21:49.274045  649678 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1006 14:21:49.274059  649678 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes.
	I1006 14:21:49.274068  649678 command_runner.go:130] > # stream_tls_cert = ""
	I1006 14:21:49.274083  649678 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1006 14:21:49.274109  649678 command_runner.go:130] > # change and CRI-O will automatically pick up the changes.
	I1006 14:21:49.274132  649678 command_runner.go:130] > # stream_tls_key = ""
	I1006 14:21:49.274143  649678 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1006 14:21:49.274153  649678 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1006 14:21:49.274162  649678 command_runner.go:130] > # automatically pick up the changes.
	I1006 14:21:49.274173  649678 command_runner.go:130] > # stream_tls_ca = ""
	I1006 14:21:49.274218  649678 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1006 14:21:49.274233  649678 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1006 14:21:49.274245  649678 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1006 14:21:49.274257  649678 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1006 14:21:49.274268  649678 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1006 14:21:49.274281  649678 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1006 14:21:49.274293  649678 command_runner.go:130] > [crio.runtime]
	I1006 14:21:49.274303  649678 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1006 14:21:49.274315  649678 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1006 14:21:49.274325  649678 command_runner.go:130] > # "nofile=1024:2048"
	I1006 14:21:49.274336  649678 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1006 14:21:49.274347  649678 command_runner.go:130] > # default_ulimits = [
	I1006 14:21:49.274353  649678 command_runner.go:130] > # ]
	I1006 14:21:49.274363  649678 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1006 14:21:49.274374  649678 command_runner.go:130] > # no_pivot = false
	I1006 14:21:49.274384  649678 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1006 14:21:49.274399  649678 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1006 14:21:49.274410  649678 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1006 14:21:49.274425  649678 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1006 14:21:49.274437  649678 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1006 14:21:49.274453  649678 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1006 14:21:49.274464  649678 command_runner.go:130] > # conmon = ""
	I1006 14:21:49.274473  649678 command_runner.go:130] > # Cgroup setting for conmon
	I1006 14:21:49.274487  649678 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1006 14:21:49.274498  649678 command_runner.go:130] > conmon_cgroup = "pod"
	I1006 14:21:49.274508  649678 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1006 14:21:49.274520  649678 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1006 14:21:49.274533  649678 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1006 14:21:49.274545  649678 command_runner.go:130] > # conmon_env = [
	I1006 14:21:49.274559  649678 command_runner.go:130] > # ]
	I1006 14:21:49.274566  649678 command_runner.go:130] > # Additional environment variables to set for all the
	I1006 14:21:49.274574  649678 command_runner.go:130] > # containers. These are overridden if set in the
	I1006 14:21:49.274583  649678 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1006 14:21:49.274593  649678 command_runner.go:130] > # default_env = [
	I1006 14:21:49.274599  649678 command_runner.go:130] > # ]
	I1006 14:21:49.274610  649678 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1006 14:21:49.274625  649678 command_runner.go:130] > # This option is deprecated, and will be inferred from whether SELinux is enabled on the host in the future.
	I1006 14:21:49.274633  649678 command_runner.go:130] > # selinux = false
	I1006 14:21:49.274646  649678 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1006 14:21:49.274658  649678 command_runner.go:130] > # for the runtime. If not specified or set to "", then the internal default seccomp profile will be used.
	I1006 14:21:49.274677  649678 command_runner.go:130] > # This option supports live configuration reload.
	I1006 14:21:49.274687  649678 command_runner.go:130] > # seccomp_profile = ""
	I1006 14:21:49.274698  649678 command_runner.go:130] > # Enable a seccomp profile for privileged containers from the local path.
	I1006 14:21:49.274707  649678 command_runner.go:130] > # This option supports live configuration reload.
	I1006 14:21:49.274715  649678 command_runner.go:130] > # privileged_seccomp_profile = ""
	I1006 14:21:49.274733  649678 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1006 14:21:49.274744  649678 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1006 14:21:49.274754  649678 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1006 14:21:49.274768  649678 command_runner.go:130] > # the profile is set to "unconfined", then this is equivalent to disabling AppArmor.
	I1006 14:21:49.274776  649678 command_runner.go:130] > # This option supports live configuration reload.
	I1006 14:21:49.274784  649678 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1006 14:21:49.274794  649678 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1006 14:21:49.274802  649678 command_runner.go:130] > # the cgroup blockio controller.
	I1006 14:21:49.274809  649678 command_runner.go:130] > # blockio_config_file = ""
	I1006 14:21:49.274820  649678 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1006 14:21:49.274828  649678 command_runner.go:130] > # blockio parameters.
	I1006 14:21:49.274840  649678 command_runner.go:130] > # blockio_reload = false
	I1006 14:21:49.274849  649678 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1006 14:21:49.274856  649678 command_runner.go:130] > # irqbalance daemon.
	I1006 14:21:49.274870  649678 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1006 14:21:49.274886  649678 command_runner.go:130] > # irqbalance_config_restore_file allows setting a CPU mask that CRI-O should
	I1006 14:21:49.274901  649678 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1006 14:21:49.274915  649678 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1006 14:21:49.274927  649678 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1006 14:21:49.274933  649678 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1006 14:21:49.274941  649678 command_runner.go:130] > # This option supports live configuration reload.
	I1006 14:21:49.274945  649678 command_runner.go:130] > # rdt_config_file = ""
	I1006 14:21:49.274950  649678 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1006 14:21:49.274955  649678 command_runner.go:130] > # cgroup_manager = "systemd"
	I1006 14:21:49.274962  649678 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1006 14:21:49.274968  649678 command_runner.go:130] > # separate_pull_cgroup = ""
	I1006 14:21:49.274974  649678 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1006 14:21:49.274982  649678 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1006 14:21:49.274986  649678 command_runner.go:130] > # will be added.
	I1006 14:21:49.274991  649678 command_runner.go:130] > # default_capabilities = [
	I1006 14:21:49.274994  649678 command_runner.go:130] > # 	"CHOWN",
	I1006 14:21:49.274998  649678 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1006 14:21:49.275001  649678 command_runner.go:130] > # 	"FSETID",
	I1006 14:21:49.275004  649678 command_runner.go:130] > # 	"FOWNER",
	I1006 14:21:49.275008  649678 command_runner.go:130] > # 	"SETGID",
	I1006 14:21:49.275026  649678 command_runner.go:130] > # 	"SETUID",
	I1006 14:21:49.275033  649678 command_runner.go:130] > # 	"SETPCAP",
	I1006 14:21:49.275037  649678 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1006 14:21:49.275040  649678 command_runner.go:130] > # 	"KILL",
	I1006 14:21:49.275043  649678 command_runner.go:130] > # ]
	I1006 14:21:49.275051  649678 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1006 14:21:49.275059  649678 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1006 14:21:49.275064  649678 command_runner.go:130] > # add_inheritable_capabilities = false
	I1006 14:21:49.275071  649678 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1006 14:21:49.275077  649678 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1006 14:21:49.275083  649678 command_runner.go:130] > default_sysctls = [
	I1006 14:21:49.275087  649678 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1006 14:21:49.275090  649678 command_runner.go:130] > ]
	I1006 14:21:49.275096  649678 command_runner.go:130] > # List of devices on the host that a
	I1006 14:21:49.275104  649678 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1006 14:21:49.275109  649678 command_runner.go:130] > # allowed_devices = [
	I1006 14:21:49.275122  649678 command_runner.go:130] > # 	"/dev/fuse",
	I1006 14:21:49.275128  649678 command_runner.go:130] > # 	"/dev/net/tun",
	I1006 14:21:49.275132  649678 command_runner.go:130] > # ]
	I1006 14:21:49.275136  649678 command_runner.go:130] > # List of additional devices, specified as
	I1006 14:21:49.275146  649678 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1006 14:21:49.275151  649678 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1006 14:21:49.275156  649678 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1006 14:21:49.275162  649678 command_runner.go:130] > # additional_devices = [
	I1006 14:21:49.275166  649678 command_runner.go:130] > # ]
	I1006 14:21:49.275170  649678 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1006 14:21:49.275176  649678 command_runner.go:130] > # cdi_spec_dirs = [
	I1006 14:21:49.275180  649678 command_runner.go:130] > # 	"/etc/cdi",
	I1006 14:21:49.275184  649678 command_runner.go:130] > # 	"/var/run/cdi",
	I1006 14:21:49.275189  649678 command_runner.go:130] > # ]
	I1006 14:21:49.275195  649678 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1006 14:21:49.275216  649678 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1006 14:21:49.275225  649678 command_runner.go:130] > # Defaults to false.
	I1006 14:21:49.275239  649678 command_runner.go:130] > # device_ownership_from_security_context = false
	I1006 14:21:49.275249  649678 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1006 14:21:49.275255  649678 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1006 14:21:49.275262  649678 command_runner.go:130] > # hooks_dir = [
	I1006 14:21:49.275267  649678 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1006 14:21:49.275273  649678 command_runner.go:130] > # ]
	I1006 14:21:49.275278  649678 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1006 14:21:49.275284  649678 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1006 14:21:49.275292  649678 command_runner.go:130] > # its default mounts from the following two files:
	I1006 14:21:49.275295  649678 command_runner.go:130] > #
	I1006 14:21:49.275300  649678 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1006 14:21:49.275309  649678 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1006 14:21:49.275315  649678 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1006 14:21:49.275328  649678 command_runner.go:130] > #
	I1006 14:21:49.275338  649678 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1006 14:21:49.275345  649678 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1006 14:21:49.275353  649678 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1006 14:21:49.275358  649678 command_runner.go:130] > #      only add mounts it finds in this file.
	I1006 14:21:49.275364  649678 command_runner.go:130] > #
	I1006 14:21:49.275370  649678 command_runner.go:130] > # default_mounts_file = ""
	I1006 14:21:49.275378  649678 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1006 14:21:49.275385  649678 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1006 14:21:49.275391  649678 command_runner.go:130] > # pids_limit = -1
	I1006 14:21:49.275398  649678 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1006 14:21:49.275406  649678 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1006 14:21:49.275412  649678 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1006 14:21:49.275420  649678 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1006 14:21:49.275426  649678 command_runner.go:130] > # log_size_max = -1
	I1006 14:21:49.275433  649678 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1006 14:21:49.275439  649678 command_runner.go:130] > # log_to_journald = false
	I1006 14:21:49.275445  649678 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1006 14:21:49.275452  649678 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1006 14:21:49.275457  649678 command_runner.go:130] > # Path to directory for container attach sockets.
	I1006 14:21:49.275463  649678 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1006 14:21:49.275467  649678 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1006 14:21:49.275474  649678 command_runner.go:130] > # bind_mount_prefix = ""
	I1006 14:21:49.275479  649678 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1006 14:21:49.275485  649678 command_runner.go:130] > # read_only = false
	I1006 14:21:49.275491  649678 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1006 14:21:49.275497  649678 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1006 14:21:49.275504  649678 command_runner.go:130] > # live configuration reload.
	I1006 14:21:49.275508  649678 command_runner.go:130] > # log_level = "info"
	I1006 14:21:49.275513  649678 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1006 14:21:49.275521  649678 command_runner.go:130] > # This option supports live configuration reload.
	I1006 14:21:49.275525  649678 command_runner.go:130] > # log_filter = ""
	I1006 14:21:49.275530  649678 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1006 14:21:49.275542  649678 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1006 14:21:49.275549  649678 command_runner.go:130] > # separated by comma.
	I1006 14:21:49.275557  649678 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1006 14:21:49.275563  649678 command_runner.go:130] > # uid_mappings = ""
	I1006 14:21:49.275569  649678 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1006 14:21:49.275577  649678 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1006 14:21:49.275585  649678 command_runner.go:130] > # separated by comma.
	I1006 14:21:49.275594  649678 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1006 14:21:49.275598  649678 command_runner.go:130] > # gid_mappings = ""
	I1006 14:21:49.275606  649678 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1006 14:21:49.275614  649678 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1006 14:21:49.275621  649678 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1006 14:21:49.275630  649678 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1006 14:21:49.275634  649678 command_runner.go:130] > # minimum_mappable_uid = -1
	I1006 14:21:49.275640  649678 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1006 14:21:49.275648  649678 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1006 14:21:49.275654  649678 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1006 14:21:49.275664  649678 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1006 14:21:49.275668  649678 command_runner.go:130] > # minimum_mappable_gid = -1
	I1006 14:21:49.275676  649678 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1006 14:21:49.275683  649678 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1006 14:21:49.275690  649678 command_runner.go:130] > # value is 30s; lower values are ignored by CRI-O.
	I1006 14:21:49.275694  649678 command_runner.go:130] > # ctr_stop_timeout = 30
	I1006 14:21:49.275700  649678 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1006 14:21:49.275706  649678 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1006 14:21:49.275711  649678 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1006 14:21:49.275718  649678 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1006 14:21:49.275722  649678 command_runner.go:130] > # drop_infra_ctr = true
	I1006 14:21:49.275731  649678 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1006 14:21:49.275736  649678 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1006 14:21:49.275746  649678 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1006 14:21:49.275752  649678 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1006 14:21:49.275759  649678 command_runner.go:130] > # shared_cpuset determines the CPU set which is allowed to be shared between guaranteed containers,
	I1006 14:21:49.275772  649678 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1006 14:21:49.275778  649678 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1006 14:21:49.275786  649678 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1006 14:21:49.275790  649678 command_runner.go:130] > # shared_cpuset = ""
	I1006 14:21:49.275800  649678 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1006 14:21:49.275805  649678 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1006 14:21:49.275811  649678 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1006 14:21:49.275817  649678 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1006 14:21:49.275824  649678 command_runner.go:130] > # pinns_path = ""
	I1006 14:21:49.275829  649678 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1006 14:21:49.275838  649678 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1006 14:21:49.275842  649678 command_runner.go:130] > # enable_criu_support = true
	I1006 14:21:49.275849  649678 command_runner.go:130] > # Enable/disable the generation of the container,
	I1006 14:21:49.275855  649678 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1006 14:21:49.275859  649678 command_runner.go:130] > # enable_pod_events = false
	I1006 14:21:49.275865  649678 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1006 14:21:49.275872  649678 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1006 14:21:49.275876  649678 command_runner.go:130] > # default_runtime = "crun"
	I1006 14:21:49.275880  649678 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1006 14:21:49.275887  649678 command_runner.go:130] > # will cause container creation to fail (as opposed to the current behavior of creating them as directories).
	I1006 14:21:49.275898  649678 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1006 14:21:49.275906  649678 command_runner.go:130] > # creation as a file is not desired either.
	I1006 14:21:49.275914  649678 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1006 14:21:49.275921  649678 command_runner.go:130] > # the hostname is being managed dynamically.
	I1006 14:21:49.275925  649678 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1006 14:21:49.275930  649678 command_runner.go:130] > # ]
	I1006 14:21:49.275936  649678 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1006 14:21:49.275945  649678 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1006 14:21:49.275951  649678 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1006 14:21:49.275955  649678 command_runner.go:130] > # Each entry in the table should follow the format:
	I1006 14:21:49.275961  649678 command_runner.go:130] > #
	I1006 14:21:49.275965  649678 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1006 14:21:49.275969  649678 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1006 14:21:49.275980  649678 command_runner.go:130] > # runtime_type = "oci"
	I1006 14:21:49.275988  649678 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1006 14:21:49.275993  649678 command_runner.go:130] > # inherit_default_runtime = false
	I1006 14:21:49.275997  649678 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1006 14:21:49.276002  649678 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1006 14:21:49.276009  649678 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1006 14:21:49.276013  649678 command_runner.go:130] > # monitor_env = []
	I1006 14:21:49.276020  649678 command_runner.go:130] > # privileged_without_host_devices = false
	I1006 14:21:49.276024  649678 command_runner.go:130] > # allowed_annotations = []
	I1006 14:21:49.276029  649678 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1006 14:21:49.276035  649678 command_runner.go:130] > # no_sync_log = false
	I1006 14:21:49.276039  649678 command_runner.go:130] > # default_annotations = {}
	I1006 14:21:49.276044  649678 command_runner.go:130] > # stream_websockets = false
	I1006 14:21:49.276052  649678 command_runner.go:130] > # seccomp_profile = ""
	I1006 14:21:49.276074  649678 command_runner.go:130] > # Where:
	I1006 14:21:49.276087  649678 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1006 14:21:49.276100  649678 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1006 14:21:49.276111  649678 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1006 14:21:49.276124  649678 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1006 14:21:49.276128  649678 command_runner.go:130] > #   in $PATH.
	I1006 14:21:49.276137  649678 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1006 14:21:49.276141  649678 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1006 14:21:49.276149  649678 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1006 14:21:49.276153  649678 command_runner.go:130] > #   state.
	I1006 14:21:49.276159  649678 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1006 14:21:49.276165  649678 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1006 14:21:49.276173  649678 command_runner.go:130] > # - inherit_default_runtime (optional, bool): when true the runtime_path,
	I1006 14:21:49.276179  649678 command_runner.go:130] > #   runtime_type, runtime_root and runtime_config_path will be replaced by
	I1006 14:21:49.276186  649678 command_runner.go:130] > #   the values from the default runtime on load time.
	I1006 14:21:49.276193  649678 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1006 14:21:49.276200  649678 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1006 14:21:49.276242  649678 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1006 14:21:49.276258  649678 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1006 14:21:49.276269  649678 command_runner.go:130] > #   The currently recognized values are:
	I1006 14:21:49.276276  649678 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1006 14:21:49.276286  649678 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1006 14:21:49.276294  649678 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1006 14:21:49.276300  649678 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1006 14:21:49.276308  649678 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1006 14:21:49.276314  649678 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1006 14:21:49.276323  649678 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1006 14:21:49.276330  649678 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1006 14:21:49.276338  649678 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1006 14:21:49.276344  649678 command_runner.go:130] > #   "seccomp-profile.kubernetes.cri-o.io" for setting the seccomp profile for:
	I1006 14:21:49.276353  649678 command_runner.go:130] > #     - a specific container by using: "seccomp-profile.kubernetes.cri-o.io/<CONTAINER_NAME>"
	I1006 14:21:49.276359  649678 command_runner.go:130] > #     - a whole pod by using: "seccomp-profile.kubernetes.cri-o.io/POD"
	I1006 14:21:49.276370  649678 command_runner.go:130] > #     Note that the annotation works on containers as well as on images.
	I1006 14:21:49.276380  649678 command_runner.go:130] > #     For images, the plain annotation "seccomp-profile.kubernetes.cri-o.io"
	I1006 14:21:49.276386  649678 command_runner.go:130] > #     can be used without the required "/POD" suffix or a container name.
	I1006 14:21:49.276396  649678 command_runner.go:130] > #   "io.kubernetes.cri-o.DisableFIPS" for disabling FIPS mode in a Kubernetes pod within a FIPS-enabled cluster.
	I1006 14:21:49.276402  649678 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1006 14:21:49.276409  649678 command_runner.go:130] > #   deprecated option "conmon".
	I1006 14:21:49.276416  649678 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1006 14:21:49.276423  649678 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1006 14:21:49.276429  649678 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1006 14:21:49.276437  649678 command_runner.go:130] > #   should be moved to the container's cgroup
	I1006 14:21:49.276444  649678 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1006 14:21:49.276451  649678 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1006 14:21:49.276459  649678 command_runner.go:130] > #   When using the pod runtime and conmon-rs, the monitor_env can be used to further configure
	I1006 14:21:49.276465  649678 command_runner.go:130] > #   conmon-rs by using:
	I1006 14:21:49.276472  649678 command_runner.go:130] > #     - LOG_DRIVER=[none,systemd,stdout] - Enable logging to the configured target, defaults to none.
	I1006 14:21:49.276481  649678 command_runner.go:130] > #     - HEAPTRACK_OUTPUT_PATH=/path/to/dir - Enable heaptrack profiling and save the files to the set directory.
	I1006 14:21:49.276488  649678 command_runner.go:130] > #     - HEAPTRACK_BINARY_PATH=/path/to/heaptrack - Enable heaptrack profiling and use set heaptrack binary.
	I1006 14:21:49.276494  649678 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1006 14:21:49.276502  649678 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1006 14:21:49.276509  649678 command_runner.go:130] > # - container_min_memory (optional, string): The minimum memory that must be set for a container.
	I1006 14:21:49.276519  649678 command_runner.go:130] > #   This value can be used to override the currently set global value for a specific runtime. If not set,
	I1006 14:21:49.276524  649678 command_runner.go:130] > #   a global default value of "12 MiB" will be used.
	I1006 14:21:49.276534  649678 command_runner.go:130] > # - no_sync_log (optional, bool): If set to true, the runtime will not sync the log file on rotate or container exit.
	I1006 14:21:49.276543  649678 command_runner.go:130] > #   This option is only valid for the 'oci' runtime type. Setting this option to true can cause data loss, e.g.
	I1006 14:21:49.276551  649678 command_runner.go:130] > #   when a machine crash happens.
	I1006 14:21:49.276558  649678 command_runner.go:130] > # - default_annotations (optional, map): Default annotations if not overridden by the pod spec.
	I1006 14:21:49.276568  649678 command_runner.go:130] > # - stream_websockets (optional, bool): Enable the WebSocket protocol for container exec, attach and port forward.
	I1006 14:21:49.276576  649678 command_runner.go:130] > # - seccomp_profile (optional, string): The absolute path of the seccomp.json profile which is used as the default
	I1006 14:21:49.276583  649678 command_runner.go:130] > #   seccomp profile for the runtime.
	I1006 14:21:49.276589  649678 command_runner.go:130] > #   If not specified or set to "", the runtime seccomp_profile will be used.
	I1006 14:21:49.276598  649678 command_runner.go:130] > #   If that is also not specified or set to "", the internal default seccomp profile will be applied.
	I1006 14:21:49.276601  649678 command_runner.go:130] > #
	I1006 14:21:49.276605  649678 command_runner.go:130] > # Using the seccomp notifier feature:
	I1006 14:21:49.276610  649678 command_runner.go:130] > #
	I1006 14:21:49.276617  649678 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1006 14:21:49.276626  649678 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1006 14:21:49.276629  649678 command_runner.go:130] > #
	I1006 14:21:49.276635  649678 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1006 14:21:49.276643  649678 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1006 14:21:49.276646  649678 command_runner.go:130] > #
	I1006 14:21:49.276655  649678 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1006 14:21:49.276664  649678 command_runner.go:130] > # feature.
	I1006 14:21:49.276670  649678 command_runner.go:130] > #
	I1006 14:21:49.276684  649678 command_runner.go:130] > # If everything is set up, CRI-O will modify chosen seccomp profiles for
	I1006 14:21:49.276693  649678 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1006 14:21:49.276700  649678 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1006 14:21:49.276708  649678 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1006 14:21:49.276714  649678 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1006 14:21:49.276720  649678 command_runner.go:130] > #
	I1006 14:21:49.276726  649678 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1006 14:21:49.276734  649678 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1006 14:21:49.276737  649678 command_runner.go:130] > #
	I1006 14:21:49.276745  649678 command_runner.go:130] > # This also means that the Pod's "restartPolicy" has to be set to "Never",
	I1006 14:21:49.276765  649678 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1006 14:21:49.276775  649678 command_runner.go:130] > #
	I1006 14:21:49.276785  649678 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1006 14:21:49.276795  649678 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1006 14:21:49.276798  649678 command_runner.go:130] > # limitation.
	I1006 14:21:49.276802  649678 command_runner.go:130] > [crio.runtime.runtimes.crun]
	I1006 14:21:49.276807  649678 command_runner.go:130] > runtime_path = "/usr/libexec/crio/crun"
	I1006 14:21:49.276815  649678 command_runner.go:130] > runtime_type = ""
	I1006 14:21:49.276822  649678 command_runner.go:130] > runtime_root = "/run/crun"
	I1006 14:21:49.276833  649678 command_runner.go:130] > inherit_default_runtime = false
	I1006 14:21:49.276841  649678 command_runner.go:130] > runtime_config_path = ""
	I1006 14:21:49.276851  649678 command_runner.go:130] > container_min_memory = ""
	I1006 14:21:49.276860  649678 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1006 14:21:49.276871  649678 command_runner.go:130] > monitor_cgroup = "pod"
	I1006 14:21:49.276877  649678 command_runner.go:130] > monitor_exec_cgroup = ""
	I1006 14:21:49.276883  649678 command_runner.go:130] > allowed_annotations = [
	I1006 14:21:49.276890  649678 command_runner.go:130] > 	"io.containers.trace-syscall",
	I1006 14:21:49.276896  649678 command_runner.go:130] > ]
	I1006 14:21:49.276902  649678 command_runner.go:130] > privileged_without_host_devices = false
	I1006 14:21:49.276909  649678 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1006 14:21:49.276916  649678 command_runner.go:130] > runtime_path = "/usr/libexec/crio/runc"
	I1006 14:21:49.276922  649678 command_runner.go:130] > runtime_type = ""
	I1006 14:21:49.276929  649678 command_runner.go:130] > runtime_root = "/run/runc"
	I1006 14:21:49.276936  649678 command_runner.go:130] > inherit_default_runtime = false
	I1006 14:21:49.276946  649678 command_runner.go:130] > runtime_config_path = ""
	I1006 14:21:49.276954  649678 command_runner.go:130] > container_min_memory = ""
	I1006 14:21:49.276967  649678 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1006 14:21:49.276978  649678 command_runner.go:130] > monitor_cgroup = "pod"
	I1006 14:21:49.276984  649678 command_runner.go:130] > monitor_exec_cgroup = ""
	I1006 14:21:49.276991  649678 command_runner.go:130] > privileged_without_host_devices = false
	I1006 14:21:49.276998  649678 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1006 14:21:49.277005  649678 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1006 14:21:49.277012  649678 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1006 14:21:49.277036  649678 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I1006 14:21:49.277057  649678 command_runner.go:130] > # The currently supported resources are "cpuperiod", "cpuquota", "cpushares", "cpulimit" and "cpuset". The values for "cpuperiod" and "cpuquota" are denoted in microseconds.
	I1006 14:21:49.277077  649678 command_runner.go:130] > # The value for "cpulimit" is denoted in millicores; this value is used to calculate the "cpuquota" with the supplied "cpuperiod" or the default "cpuperiod".
	I1006 14:21:49.277093  649678 command_runner.go:130] > # Note that the "cpulimit" field overrides the "cpuquota" value supplied in this configuration.
	I1006 14:21:49.277104  649678 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1006 14:21:49.277125  649678 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1006 14:21:49.277141  649678 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1006 14:21:49.277151  649678 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1006 14:21:49.277167  649678 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1006 14:21:49.277177  649678 command_runner.go:130] > # Example:
	I1006 14:21:49.277189  649678 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1006 14:21:49.277201  649678 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1006 14:21:49.277225  649678 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1006 14:21:49.277238  649678 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1006 14:21:49.277249  649678 command_runner.go:130] > # cpuset = "0-1"
	I1006 14:21:49.277260  649678 command_runner.go:130] > # cpushares = "5"
	I1006 14:21:49.277270  649678 command_runner.go:130] > # cpuquota = "1000"
	I1006 14:21:49.277281  649678 command_runner.go:130] > # cpuperiod = "100000"
	I1006 14:21:49.277292  649678 command_runner.go:130] > # cpulimit = "35"
	I1006 14:21:49.277300  649678 command_runner.go:130] > # Where:
	I1006 14:21:49.277307  649678 command_runner.go:130] > # The workload name is workload-type.
	I1006 14:21:49.277323  649678 command_runner.go:130] > # To select it, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1006 14:21:49.277336  649678 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1006 14:21:49.277349  649678 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1006 14:21:49.277366  649678 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1006 14:21:49.277381  649678 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
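For context, a pod opts into the workload defined above purely through annotations, following the example annotation form shown in the comments. A minimal sketch (the pod name, container name "app", and the cpushares value are illustrative, not taken from this run):

	cat <<'EOF' | kubectl apply -f -
	apiVersion: v1
	kind: Pod
	metadata:
	  name: workload-demo                                   # illustrative name
	  annotations:
	    io.crio/workload: ""                                # activation annotation; key only, value ignored
	    io.crio.workload-type/app: '{"cpushares": "512"}'   # per-container override for container "app"
	spec:
	  containers:
	  - name: app
	    image: registry.k8s.io/pause:3.10.1
	EOF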
	I1006 14:21:49.277393  649678 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1006 14:21:49.277406  649678 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1006 14:21:49.277416  649678 command_runner.go:130] > # Default value is set to true
	I1006 14:21:49.277427  649678 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1006 14:21:49.277441  649678 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1006 14:21:49.277453  649678 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1006 14:21:49.277465  649678 command_runner.go:130] > # Default value is set to 'false'
	I1006 14:21:49.277479  649678 command_runner.go:130] > # disable_hostport_mapping = false
	I1006 14:21:49.277492  649678 command_runner.go:130] > # timezone sets the timezone for a container in CRI-O.
	I1006 14:21:49.277513  649678 command_runner.go:130] > # If an empty string is provided, CRI-O retains its default behavior. Use 'Local' to match the timezone of the host machine.
	I1006 14:21:49.277521  649678 command_runner.go:130] > # timezone = ""
	I1006 14:21:49.277531  649678 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1006 14:21:49.277536  649678 command_runner.go:130] > #
	I1006 14:21:49.277547  649678 command_runner.go:130] > # CRI-O reads its configured registry defaults from the system-wide
	I1006 14:21:49.277557  649678 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf.
	I1006 14:21:49.277565  649678 command_runner.go:130] > [crio.image]
	I1006 14:21:49.277578  649678 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1006 14:21:49.277589  649678 command_runner.go:130] > # default_transport = "docker://"
	I1006 14:21:49.277603  649678 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1006 14:21:49.277617  649678 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1006 14:21:49.277627  649678 command_runner.go:130] > # global_auth_file = ""
	I1006 14:21:49.277652  649678 command_runner.go:130] > # The image used to instantiate infra containers.
	I1006 14:21:49.277665  649678 command_runner.go:130] > # This option supports live configuration reload.
	I1006 14:21:49.277675  649678 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.10.1"
	I1006 14:21:49.277690  649678 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1006 14:21:49.277704  649678 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1006 14:21:49.277715  649678 command_runner.go:130] > # This option supports live configuration reload.
	I1006 14:21:49.277730  649678 command_runner.go:130] > # pause_image_auth_file = ""
	I1006 14:21:49.277741  649678 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1006 14:21:49.277755  649678 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I1006 14:21:49.277770  649678 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I1006 14:21:49.277785  649678 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1006 14:21:49.277796  649678 command_runner.go:130] > # pause_command = "/pause"
	I1006 14:21:49.277811  649678 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1006 14:21:49.277824  649678 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1006 14:21:49.277838  649678 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1006 14:21:49.277851  649678 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1006 14:21:49.277864  649678 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1006 14:21:49.277879  649678 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1006 14:21:49.277889  649678 command_runner.go:130] > # pinned_images = [
	I1006 14:21:49.277904  649678 command_runner.go:130] > # ]
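As a concrete illustration of the matching rules just described, a drop-in like the following (file name and second entry are hypothetical) would pin the pause image by exact match and a whole repository by glob:

	sudo tee /etc/crio/crio.conf.d/20-pinned-images.conf >/dev/null <<'EOF'
	[crio.image]
	pinned_images = [
	    "registry.k8s.io/pause:3.10.1",     # exact match: must match the entire name
	    "registry.example.com/critical/*",  # glob match: wildcard only at the end
	]
	EOF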
	I1006 14:21:49.277918  649678 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1006 14:21:49.277929  649678 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1006 14:21:49.277943  649678 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1006 14:21:49.277957  649678 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1006 14:21:49.277969  649678 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1006 14:21:49.277982  649678 command_runner.go:130] > signature_policy = "/etc/crio/policy.json"
	I1006 14:21:49.277994  649678 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1006 14:21:49.278013  649678 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1006 14:21:49.278025  649678 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1006 14:21:49.278042  649678 command_runner.go:130] > # or the concatenated path is nonexistent, then the signature_policy or system
	I1006 14:21:49.278056  649678 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1006 14:21:49.278069  649678 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
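The signature_policy above points at a containers-policy.json(5) file. A minimal permissive policy of the kind minikube typically writes there (a sketch; this run does not dump the file) would be:

	sudo tee /etc/crio/policy.json >/dev/null <<'EOF'
	{
	  "default": [ { "type": "insecureAcceptAnything" } ]
	}
	EOF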
	I1006 14:21:49.278083  649678 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1006 14:21:49.278099  649678 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1006 14:21:49.278109  649678 command_runner.go:130] > # changing them here.
	I1006 14:21:49.278127  649678 command_runner.go:130] > # This option is deprecated. Use registries.conf file instead.
	I1006 14:21:49.278138  649678 command_runner.go:130] > # insecure_registries = [
	I1006 14:21:49.278148  649678 command_runner.go:130] > # ]
	I1006 14:21:49.278163  649678 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1006 14:21:49.278181  649678 command_runner.go:130] > # ignore; the last will ignore volumes entirely.
	I1006 14:21:49.278192  649678 command_runner.go:130] > # image_volumes = "mkdir"
	I1006 14:21:49.278214  649678 command_runner.go:130] > # Temporary directory to use for storing big files
	I1006 14:21:49.278227  649678 command_runner.go:130] > # big_files_temporary_dir = ""
	I1006 14:21:49.278237  649678 command_runner.go:130] > # If true, CRI-O will automatically reload the mirror registry when
	I1006 14:21:49.278253  649678 command_runner.go:130] > # there is an update to the 'registries.conf.d' directory. Default value is set to 'false'.
	I1006 14:21:49.278265  649678 command_runner.go:130] > # auto_reload_registries = false
	I1006 14:21:49.278278  649678 command_runner.go:130] > # The timeout for an image pull to make progress until the pull operation
	I1006 14:21:49.278294  649678 command_runner.go:130] > # gets canceled. This value will also be used to calculate the pull progress interval as pull_progress_timeout / 10.
	I1006 14:21:49.278307  649678 command_runner.go:130] > # Can be set to 0 to disable the timeout as well as the progress output.
	I1006 14:21:49.278317  649678 command_runner.go:130] > # pull_progress_timeout = "0s"
	I1006 14:21:49.278329  649678 command_runner.go:130] > # The mode of short name resolution.
	I1006 14:21:49.278343  649678 command_runner.go:130] > # The valid values are "enforcing" and "disabled", and the default is "enforcing".
	I1006 14:21:49.278364  649678 command_runner.go:130] > # If "enforcing", an image pull will fail if a short name is used and the results are ambiguous.
	I1006 14:21:49.278377  649678 command_runner.go:130] > # If "disabled", the first result will be chosen.
	I1006 14:21:49.278389  649678 command_runner.go:130] > # short_name_mode = "enforcing"
	I1006 14:21:49.278403  649678 command_runner.go:130] > # OCIArtifactMountSupport controls whether CRI-O should support OCI artifacts.
	I1006 14:21:49.278414  649678 command_runner.go:130] > # If set to false, mounting OCI Artifacts will result in an error.
	I1006 14:21:49.278425  649678 command_runner.go:130] > # oci_artifact_mount_support = true
	I1006 14:21:49.278440  649678 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1006 14:21:49.278450  649678 command_runner.go:130] > # CNI plugins.
	I1006 14:21:49.278460  649678 command_runner.go:130] > [crio.network]
	I1006 14:21:49.278474  649678 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1006 14:21:49.278486  649678 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I1006 14:21:49.278497  649678 command_runner.go:130] > # cni_default_network = ""
	I1006 14:21:49.278508  649678 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1006 14:21:49.278519  649678 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1006 14:21:49.278532  649678 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1006 14:21:49.278543  649678 command_runner.go:130] > # plugin_dirs = [
	I1006 14:21:49.278554  649678 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1006 14:21:49.278563  649678 command_runner.go:130] > # ]
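Since cni_default_network is left empty, CRI-O takes the first configuration file it finds in network_dir. A bridge-based example of the shape it expects (network name and subnet are illustrative; the run below ends up recommending kindnet instead):

	sudo tee /etc/cni/net.d/10-example.conflist >/dev/null <<'EOF'
	{
	  "cniVersion": "1.0.0",
	  "name": "example-net",
	  "plugins": [
	    { "type": "bridge", "bridge": "cni0",
	      "ipam": { "type": "host-local", "ranges": [[ { "subnet": "10.244.0.0/16" } ]] } },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF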
	I1006 14:21:49.278574  649678 command_runner.go:130] > # List of included pod metrics.
	I1006 14:21:49.278586  649678 command_runner.go:130] > # included_pod_metrics = [
	I1006 14:21:49.278594  649678 command_runner.go:130] > # ]
	I1006 14:21:49.278605  649678 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1006 14:21:49.278615  649678 command_runner.go:130] > [crio.metrics]
	I1006 14:21:49.278627  649678 command_runner.go:130] > # Globally enable or disable metrics support.
	I1006 14:21:49.278639  649678 command_runner.go:130] > # enable_metrics = false
	I1006 14:21:49.278651  649678 command_runner.go:130] > # Specify enabled metrics collectors.
	I1006 14:21:49.278662  649678 command_runner.go:130] > # By default, all metrics are enabled.
	I1006 14:21:49.278676  649678 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1006 14:21:49.278689  649678 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1006 14:21:49.278700  649678 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1006 14:21:49.278712  649678 command_runner.go:130] > # metrics_collectors = [
	I1006 14:21:49.278718  649678 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1006 14:21:49.278727  649678 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1006 14:21:49.278740  649678 command_runner.go:130] > # 	"containers_oom_total",
	I1006 14:21:49.278747  649678 command_runner.go:130] > # 	"processes_defunct",
	I1006 14:21:49.278754  649678 command_runner.go:130] > # 	"operations_total",
	I1006 14:21:49.278761  649678 command_runner.go:130] > # 	"operations_latency_seconds",
	I1006 14:21:49.278769  649678 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1006 14:21:49.278776  649678 command_runner.go:130] > # 	"operations_errors_total",
	I1006 14:21:49.278786  649678 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1006 14:21:49.278798  649678 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1006 14:21:49.278810  649678 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1006 14:21:49.278822  649678 command_runner.go:130] > # 	"image_pulls_success_total",
	I1006 14:21:49.278833  649678 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1006 14:21:49.278844  649678 command_runner.go:130] > # 	"containers_oom_count_total",
	I1006 14:21:49.278856  649678 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1006 14:21:49.278867  649678 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1006 14:21:49.278878  649678 command_runner.go:130] > # 	"containers_stopped_monitor_count",
	I1006 14:21:49.278886  649678 command_runner.go:130] > # ]
	I1006 14:21:49.278896  649678 command_runner.go:130] > # The IP address or hostname on which the metrics server will listen.
	I1006 14:21:49.278907  649678 command_runner.go:130] > # metrics_host = "127.0.0.1"
	I1006 14:21:49.278916  649678 command_runner.go:130] > # The port on which the metrics server will listen.
	I1006 14:21:49.278927  649678 command_runner.go:130] > # metrics_port = 9090
	I1006 14:21:49.278939  649678 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1006 14:21:49.278950  649678 command_runner.go:130] > # metrics_socket = ""
	I1006 14:21:49.278962  649678 command_runner.go:130] > # The certificate for the secure metrics server.
	I1006 14:21:49.278975  649678 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1006 14:21:49.278986  649678 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1006 14:21:49.278998  649678 command_runner.go:130] > # certificate on any modification event.
	I1006 14:21:49.279009  649678 command_runner.go:130] > # metrics_cert = ""
	I1006 14:21:49.279018  649678 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1006 14:21:49.279031  649678 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1006 14:21:49.279042  649678 command_runner.go:130] > # metrics_key = ""
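If enable_metrics were flipped to true (it stays at the default false in this run), the endpoint configured above could be checked by hand; the metric names follow the collector names listed earlier:

	curl -s http://127.0.0.1:9090/metrics | grep '^crio_' | head -n 5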
	I1006 14:21:49.279054  649678 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1006 14:21:49.279065  649678 command_runner.go:130] > [crio.tracing]
	I1006 14:21:49.279078  649678 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1006 14:21:49.279088  649678 command_runner.go:130] > # enable_tracing = false
	I1006 14:21:49.279100  649678 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I1006 14:21:49.279118  649678 command_runner.go:130] > # tracing_endpoint = "127.0.0.1:4317"
	I1006 14:21:49.279133  649678 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1006 14:21:49.279145  649678 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1006 14:21:49.279155  649678 command_runner.go:130] > # CRI-O NRI configuration.
	I1006 14:21:49.279165  649678 command_runner.go:130] > [crio.nri]
	I1006 14:21:49.279176  649678 command_runner.go:130] > # Globally enable or disable NRI.
	I1006 14:21:49.279185  649678 command_runner.go:130] > # enable_nri = true
	I1006 14:21:49.279195  649678 command_runner.go:130] > # NRI socket to listen on.
	I1006 14:21:49.279220  649678 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1006 14:21:49.279232  649678 command_runner.go:130] > # NRI plugin directory to use.
	I1006 14:21:49.279239  649678 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1006 14:21:49.279251  649678 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1006 14:21:49.279263  649678 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1006 14:21:49.279276  649678 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1006 14:21:49.279348  649678 command_runner.go:130] > # nri_disable_connections = false
	I1006 14:21:49.279363  649678 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1006 14:21:49.279371  649678 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1006 14:21:49.279381  649678 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1006 14:21:49.279393  649678 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1006 14:21:49.279404  649678 command_runner.go:130] > # NRI default validator configuration.
	I1006 14:21:49.279420  649678 command_runner.go:130] > # If enabled, the builtin default validator can be used to reject a container if some
	I1006 14:21:49.279434  649678 command_runner.go:130] > # NRI plugin requested a restricted adjustment. Currently the following adjustments
	I1006 14:21:49.279445  649678 command_runner.go:130] > # can be restricted/rejected:
	I1006 14:21:49.279455  649678 command_runner.go:130] > # - OCI hook injection
	I1006 14:21:49.279467  649678 command_runner.go:130] > # - adjustment of runtime default seccomp profile
	I1006 14:21:49.279479  649678 command_runner.go:130] > # - adjustment of unconfined seccomp profile
	I1006 14:21:49.279488  649678 command_runner.go:130] > # - adjustment of a custom seccomp profile
	I1006 14:21:49.279499  649678 command_runner.go:130] > # - adjustment of linux namespaces
	I1006 14:21:49.279513  649678 command_runner.go:130] > # Additionally, the default validator can be used to reject container creation if any
	I1006 14:21:49.279528  649678 command_runner.go:130] > # of a required set of plugins has not processed a container creation request, unless
	I1006 14:21:49.279541  649678 command_runner.go:130] > # the container has been annotated to tolerate a missing plugin.
	I1006 14:21:49.279550  649678 command_runner.go:130] > #
	I1006 14:21:49.279561  649678 command_runner.go:130] > # [crio.nri.default_validator]
	I1006 14:21:49.279574  649678 command_runner.go:130] > # nri_enable_default_validator = false
	I1006 14:21:49.279587  649678 command_runner.go:130] > # nri_validator_reject_oci_hook_adjustment = false
	I1006 14:21:49.279600  649678 command_runner.go:130] > # nri_validator_reject_runtime_default_seccomp_adjustment = false
	I1006 14:21:49.279613  649678 command_runner.go:130] > # nri_validator_reject_unconfined_seccomp_adjustment = false
	I1006 14:21:49.279626  649678 command_runner.go:130] > # nri_validator_reject_custom_seccomp_adjustment = false
	I1006 14:21:49.279636  649678 command_runner.go:130] > # nri_validator_reject_namespace_adjustment = false
	I1006 14:21:49.279646  649678 command_runner.go:130] > # nri_validator_required_plugins = [
	I1006 14:21:49.279656  649678 command_runner.go:130] > # ]
	I1006 14:21:49.279668  649678 command_runner.go:130] > # nri_validator_tolerate_missing_plugins_annotation = ""
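Tying the validator knobs together, an illustrative drop-in (file name hypothetical) that enables the built-in validator and rejects OCI hook injection would look like:

	sudo tee /etc/crio/crio.conf.d/30-nri-validator.conf >/dev/null <<'EOF'
	[crio.nri.default_validator]
	nri_enable_default_validator = true
	nri_validator_reject_oci_hook_adjustment = true
	EOF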
	I1006 14:21:49.279681  649678 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1006 14:21:49.279691  649678 command_runner.go:130] > [crio.stats]
	I1006 14:21:49.279704  649678 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1006 14:21:49.279717  649678 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1006 14:21:49.279728  649678 command_runner.go:130] > # stats_collection_period = 0
	I1006 14:21:49.279739  649678 command_runner.go:130] > # The number of seconds between collecting pod/container stats and pod
	I1006 14:21:49.279753  649678 command_runner.go:130] > # sandbox metrics. If set to 0, the metrics/stats are collected on-demand instead.
	I1006 14:21:49.279764  649678 command_runner.go:130] > # collection_period = 0
	I1006 14:21:49.279811  649678 command_runner.go:130] ! time="2025-10-06T14:21:49.258239123Z" level=info msg="Updating config from single file: /etc/crio/crio.conf"
	I1006 14:21:49.279828  649678 command_runner.go:130] ! time="2025-10-06T14:21:49.258265766Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf"
	I1006 14:21:49.279842  649678 command_runner.go:130] ! time="2025-10-06T14:21:49.258283938Z" level=info msg="Skipping not-existing config file \"/etc/crio/crio.conf\""
	I1006 14:21:49.279857  649678 command_runner.go:130] ! time="2025-10-06T14:21:49.25830256Z" level=info msg="Updating config from path: /etc/crio/crio.conf.d"
	I1006 14:21:49.279875  649678 command_runner.go:130] ! time="2025-10-06T14:21:49.258357499Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:21:49.279892  649678 command_runner.go:130] ! time="2025-10-06T14:21:49.258517334Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/10-crio.conf"
	I1006 14:21:49.279912  649678 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1006 14:21:49.280045  649678 cni.go:84] Creating CNI manager for ""
	I1006 14:21:49.280059  649678 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1006 14:21:49.280078  649678 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1006 14:21:49.280122  649678 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-135520 NodeName:functional-135520 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1006 14:21:49.280303  649678 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-135520"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1006 14:21:49.280384  649678 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1006 14:21:49.288800  649678 command_runner.go:130] > kubeadm
	I1006 14:21:49.288826  649678 command_runner.go:130] > kubectl
	I1006 14:21:49.288833  649678 command_runner.go:130] > kubelet
	I1006 14:21:49.288864  649678 binaries.go:44] Found k8s binaries, skipping transfer
	I1006 14:21:49.288912  649678 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1006 14:21:49.296476  649678 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1006 14:21:49.308883  649678 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1006 14:21:49.321172  649678 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
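The config dumped above is what lands in /var/tmp/minikube/kubeadm.yaml.new. On a fresh node it would be handed to kubeadm directly; in this run the cluster already exists, so minikube only diffs it against the live copy further below. A sketch of the fresh-start path, under that assumption:

	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init \
	  --config /var/tmp/minikube/kubeadm.yaml.new \
	  --ignore-preflight-errors=all   # flag set is illustrative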
	I1006 14:21:49.333376  649678 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1006 14:21:49.336963  649678 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1006 14:21:49.337019  649678 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 14:21:49.424422  649678 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1006 14:21:49.437476  649678 certs.go:69] Setting up /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520 for IP: 192.168.49.2
	I1006 14:21:49.437505  649678 certs.go:195] generating shared ca certs ...
	I1006 14:21:49.437527  649678 certs.go:227] acquiring lock for ca certs: {Name:mka0cc25cb6a953e937aa825fc55167759271aaf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:21:49.437678  649678 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21701-626179/.minikube/ca.key
	I1006 14:21:49.437730  649678 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21701-626179/.minikube/proxy-client-ca.key
	I1006 14:21:49.437748  649678 certs.go:257] generating profile certs ...
	I1006 14:21:49.437847  649678 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/client.key
	I1006 14:21:49.437896  649678 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/apiserver.key.72a46e8e
	I1006 14:21:49.437936  649678 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/proxy-client.key
	I1006 14:21:49.437949  649678 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1006 14:21:49.437963  649678 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1006 14:21:49.437984  649678 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1006 14:21:49.438003  649678 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1006 14:21:49.438018  649678 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1006 14:21:49.438035  649678 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1006 14:21:49.438049  649678 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1006 14:21:49.438064  649678 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1006 14:21:49.438123  649678 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/629719.pem (1338 bytes)
	W1006 14:21:49.438160  649678 certs.go:480] ignoring /home/jenkins/minikube-integration/21701-626179/.minikube/certs/629719_empty.pem, impossibly tiny 0 bytes
	I1006 14:21:49.438171  649678 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca-key.pem (1675 bytes)
	I1006 14:21:49.438196  649678 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem (1082 bytes)
	I1006 14:21:49.438246  649678 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/cert.pem (1123 bytes)
	I1006 14:21:49.438271  649678 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/key.pem (1679 bytes)
	I1006 14:21:49.438316  649678 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem (1708 bytes)
	I1006 14:21:49.438344  649678 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem -> /usr/share/ca-certificates/6297192.pem
	I1006 14:21:49.438359  649678 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1006 14:21:49.438381  649678 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/629719.pem -> /usr/share/ca-certificates/629719.pem
	I1006 14:21:49.439032  649678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1006 14:21:49.456437  649678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1006 14:21:49.473578  649678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1006 14:21:49.490593  649678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1006 14:21:49.508347  649678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1006 14:21:49.525339  649678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1006 14:21:49.541997  649678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1006 14:21:49.558467  649678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1006 14:21:49.576359  649678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem --> /usr/share/ca-certificates/6297192.pem (1708 bytes)
	I1006 14:21:49.593578  649678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1006 14:21:49.610863  649678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/certs/629719.pem --> /usr/share/ca-certificates/629719.pem (1338 bytes)
	I1006 14:21:49.628123  649678 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1006 14:21:49.640270  649678 ssh_runner.go:195] Run: openssl version
	I1006 14:21:49.646279  649678 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1006 14:21:49.646391  649678 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6297192.pem && ln -fs /usr/share/ca-certificates/6297192.pem /etc/ssl/certs/6297192.pem"
	I1006 14:21:49.654553  649678 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6297192.pem
	I1006 14:21:49.658110  649678 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct  6 14:13 /usr/share/ca-certificates/6297192.pem
	I1006 14:21:49.658254  649678 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  6 14:13 /usr/share/ca-certificates/6297192.pem
	I1006 14:21:49.658303  649678 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6297192.pem
	I1006 14:21:49.692318  649678 command_runner.go:130] > 3ec20f2e
	I1006 14:21:49.692406  649678 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6297192.pem /etc/ssl/certs/3ec20f2e.0"
	I1006 14:21:49.700814  649678 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1006 14:21:49.709140  649678 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1006 14:21:49.712721  649678 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct  6 13:56 /usr/share/ca-certificates/minikubeCA.pem
	I1006 14:21:49.712738  649678 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  6 13:56 /usr/share/ca-certificates/minikubeCA.pem
	I1006 14:21:49.712772  649678 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1006 14:21:49.745663  649678 command_runner.go:130] > b5213941
	I1006 14:21:49.745998  649678 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1006 14:21:49.754083  649678 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/629719.pem && ln -fs /usr/share/ca-certificates/629719.pem /etc/ssl/certs/629719.pem"
	I1006 14:21:49.762664  649678 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/629719.pem
	I1006 14:21:49.766415  649678 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct  6 14:13 /usr/share/ca-certificates/629719.pem
	I1006 14:21:49.766461  649678 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  6 14:13 /usr/share/ca-certificates/629719.pem
	I1006 14:21:49.766502  649678 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/629719.pem
	I1006 14:21:49.800644  649678 command_runner.go:130] > 51391683
	I1006 14:21:49.800985  649678 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/629719.pem /etc/ssl/certs/51391683.0"
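The hash-and-symlink sequence above follows OpenSSL's CApath convention: a CA is found at verify time through a symlink named <subject-hash>.0, so no directory scan is needed. The same steps by hand (leaf certificate path illustrative):

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # prints e.g. b5213941
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"
	openssl verify -CApath /etc/ssl/certs /path/to/leaf.crt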
	I1006 14:21:49.809049  649678 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1006 14:21:49.812721  649678 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1006 14:21:49.812776  649678 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1006 14:21:49.812784  649678 command_runner.go:130] > Device: 8,1	Inode: 580300      Links: 1
	I1006 14:21:49.812793  649678 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1006 14:21:49.812800  649678 command_runner.go:130] > Access: 2025-10-06 14:17:42.533320203 +0000
	I1006 14:21:49.812811  649678 command_runner.go:130] > Modify: 2025-10-06 14:13:37.457627952 +0000
	I1006 14:21:49.812819  649678 command_runner.go:130] > Change: 2025-10-06 14:13:37.457627952 +0000
	I1006 14:21:49.812829  649678 command_runner.go:130] >  Birth: 2025-10-06 14:13:37.457627952 +0000
	I1006 14:21:49.812886  649678 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1006 14:21:49.846896  649678 command_runner.go:130] > Certificate will not expire
	I1006 14:21:49.847277  649678 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1006 14:21:49.881096  649678 command_runner.go:130] > Certificate will not expire
	I1006 14:21:49.881431  649678 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1006 14:21:49.916333  649678 command_runner.go:130] > Certificate will not expire
	I1006 14:21:49.916837  649678 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1006 14:21:49.951128  649678 command_runner.go:130] > Certificate will not expire
	I1006 14:21:49.951323  649678 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1006 14:21:49.984919  649678 command_runner.go:130] > Certificate will not expire
	I1006 14:21:49.985255  649678 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1006 14:21:50.018710  649678 command_runner.go:130] > Certificate will not expire
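Each -checkend 86400 probe above exits 0 only if the certificate is still valid 24 hours from now, which is what lets minikube skip regeneration here. The same check, spelled out as a sketch:

	if openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/apiserver-kubelet-client.crt; then
	  echo "still valid for at least 24h; reuse"
	else
	  echo "expires within 24h; regenerate"
	fi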
	I1006 14:21:50.018987  649678 kubeadm.go:400] StartCluster: {Name:functional-135520 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-135520 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 14:21:50.019061  649678 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1006 14:21:50.019118  649678 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1006 14:21:50.047552  649678 cri.go:89] found id: ""
	I1006 14:21:50.047624  649678 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1006 14:21:50.055103  649678 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1006 14:21:50.055125  649678 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1006 14:21:50.055137  649678 command_runner.go:130] > /var/lib/minikube/etcd:
	I1006 14:21:50.055780  649678 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1006 14:21:50.055795  649678 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1006 14:21:50.055835  649678 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1006 14:21:50.063106  649678 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1006 14:21:50.063218  649678 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-135520" does not appear in /home/jenkins/minikube-integration/21701-626179/kubeconfig
	I1006 14:21:50.063263  649678 kubeconfig.go:62] /home/jenkins/minikube-integration/21701-626179/kubeconfig needs updating (will repair): [kubeconfig missing "functional-135520" cluster setting kubeconfig missing "functional-135520" context setting]
	I1006 14:21:50.063581  649678 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-626179/kubeconfig: {Name:mke84a74c9d22714f21826744ac414fa621492d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:21:50.064282  649678 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21701-626179/kubeconfig
	I1006 14:21:50.064435  649678 kapi.go:59] client config for functional-135520: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/client.crt", KeyFile:"/home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/client.key", CAFile:"/home/jenkins/minikube-integration/21701-626179/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1006 14:21:50.064874  649678 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1006 14:21:50.064894  649678 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1006 14:21:50.064898  649678 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1006 14:21:50.064902  649678 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1006 14:21:50.064906  649678 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1006 14:21:50.064950  649678 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1006 14:21:50.065393  649678 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1006 14:21:50.072886  649678 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1006 14:21:50.072922  649678 kubeadm.go:601] duration metric: took 17.120794ms to restartPrimaryControlPlane
	I1006 14:21:50.072932  649678 kubeadm.go:402] duration metric: took 53.951913ms to StartCluster
	I1006 14:21:50.072948  649678 settings.go:142] acquiring lock: {Name:mk49b10f71f24d1f54d5c453b3b04e717e9a9100 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:21:50.073763  649678 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21701-626179/kubeconfig
	I1006 14:21:50.074346  649678 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-626179/kubeconfig: {Name:mke84a74c9d22714f21826744ac414fa621492d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:21:50.074579  649678 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1006 14:21:50.074661  649678 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1006 14:21:50.074799  649678 addons.go:69] Setting storage-provisioner=true in profile "functional-135520"
	I1006 14:21:50.074825  649678 addons.go:238] Setting addon storage-provisioner=true in "functional-135520"
	I1006 14:21:50.074761  649678 config.go:182] Loaded profile config "functional-135520": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 14:21:50.074866  649678 addons.go:69] Setting default-storageclass=true in profile "functional-135520"
	I1006 14:21:50.074859  649678 host.go:66] Checking if "functional-135520" exists ...
	I1006 14:21:50.074881  649678 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-135520"
	I1006 14:21:50.075174  649678 cli_runner.go:164] Run: docker container inspect functional-135520 --format={{.State.Status}}
	I1006 14:21:50.075488  649678 cli_runner.go:164] Run: docker container inspect functional-135520 --format={{.State.Status}}
	I1006 14:21:50.077233  649678 out.go:179] * Verifying Kubernetes components...
	I1006 14:21:50.078370  649678 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 14:21:50.095495  649678 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21701-626179/kubeconfig
	I1006 14:21:50.095656  649678 kapi.go:59] client config for functional-135520: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/client.crt", KeyFile:"/home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/client.key", CAFile:"/home/jenkins/minikube-integration/21701-626179/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1006 14:21:50.095938  649678 addons.go:238] Setting addon default-storageclass=true in "functional-135520"
	I1006 14:21:50.095974  649678 host.go:66] Checking if "functional-135520" exists ...
	I1006 14:21:50.096327  649678 cli_runner.go:164] Run: docker container inspect functional-135520 --format={{.State.Status}}
	I1006 14:21:50.100068  649678 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1006 14:21:50.101767  649678 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 14:21:50.101786  649678 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1006 14:21:50.101831  649678 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-135520
	I1006 14:21:50.122986  649678 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1006 14:21:50.123017  649678 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1006 14:21:50.123083  649678 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-135520
	I1006 14:21:50.128190  649678 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32878 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/functional-135520/id_rsa Username:docker}
	I1006 14:21:50.141305  649678 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32878 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/functional-135520/id_rsa Username:docker}
	I1006 14:21:50.171892  649678 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1006 14:21:50.185683  649678 node_ready.go:35] waiting up to 6m0s for node "functional-135520" to be "Ready" ...
	I1006 14:21:50.185842  649678 type.go:168] "Request Body" body=""
	I1006 14:21:50.185906  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:21:50.186211  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
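The readiness poll above is a plain GET against the node object; the equivalent request can be reproduced from the host with kubectl (context name as written by this run):

	kubectl --context functional-135520 get --raw /api/v1/nodes/functional-135520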
	I1006 14:21:50.238569  649678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 14:21:50.250369  649678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1006 14:21:50.297302  649678 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:21:50.297371  649678 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:50.297421  649678 retry.go:31] will retry after 341.445316ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:50.306094  649678 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:21:50.306137  649678 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:50.306156  649678 retry.go:31] will retry after 289.440052ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
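Both addon applies fail for the same reason: kubectl's client-side validation needs the OpenAPI schema from the apiserver, and nothing is listening on localhost:8441 inside the node, so the apply never reaches the manifest at all. Rather than taking the error message's suggestion of --validate=false, addons.go keeps validation on and re-runs the command (switching to `apply --force` from the second attempt onward) after a short randomized delay. A minimal sketch of that retry wrapper, assuming a jittered, roughly doubling backoff consistent with the delays logged here; this is not minikube's actual retry.go:

    package main

    import (
    	"fmt"
    	"math/rand"
    	"os/exec"
    	"time"
    )

    // applyWithRetry re-runs `kubectl apply --force -f manifest` until it
    // succeeds or attempts are exhausted, sleeping a jittered, roughly
    // doubling interval between tries (the log shows ~300ms growing to ~12s).
    func applyWithRetry(manifest string, attempts int) error {
    	delay := 300 * time.Millisecond
    	var lastErr error
    	for i := 0; i < attempts; i++ {
    		out, err := exec.Command(
    			"sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig",
    			"/var/lib/minikube/binaries/v1.34.1/kubectl",
    			"apply", "--force", "-f", manifest,
    		).CombinedOutput()
    		if err == nil {
    			return nil
    		}
    		lastErr = fmt.Errorf("apply %s: %w: %s", manifest, err, out)
    		// Jitter keeps the two concurrent appliers (storageclass and
    		// storage-provisioner) from retrying in lockstep.
    		time.Sleep(delay + time.Duration(rand.Int63n(int64(delay))))
    		delay *= 2
    	}
    	return lastErr
    }

    func main() {
    	if err := applyWithRetry("/etc/kubernetes/addons/storageclass.yaml", 8); err != nil {
    		fmt.Println(err)
    	}
    }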
	I1006 14:21:50.596773  649678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1006 14:21:50.639555  649678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 14:21:50.652478  649678 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:21:50.652547  649678 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:50.652572  649678 retry.go:31] will retry after 276.474886ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:50.686728  649678 type.go:168] "Request Body" body=""
	I1006 14:21:50.686820  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:21:50.687192  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:21:50.696244  649678 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:21:50.696297  649678 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:50.696320  649678 retry.go:31] will retry after 208.115159ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:50.904724  649678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 14:21:50.929427  649678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1006 14:21:50.961651  649678 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:21:50.961718  649678 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:50.961741  649678 retry.go:31] will retry after 526.763649ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:50.984274  649678 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:21:50.988765  649678 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:50.988799  649678 retry.go:31] will retry after 299.40846ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:51.186119  649678 type.go:168] "Request Body" body=""
	I1006 14:21:51.186232  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:21:51.186600  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:21:51.288897  649678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1006 14:21:51.344296  649678 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:21:51.344362  649678 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:51.344390  649678 retry.go:31] will retry after 1.255489073s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:51.489635  649678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 14:21:51.542509  649678 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:21:51.545518  649678 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:51.545558  649678 retry.go:31] will retry after 1.109395122s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:51.686960  649678 type.go:168] "Request Body" body=""
	I1006 14:21:51.687044  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:21:51.687429  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:21:52.186098  649678 type.go:168] "Request Body" body=""
	I1006 14:21:52.186177  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:21:52.186579  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:21:52.186647  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
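The readiness loop keeps polling every half second but only surfaces the underlying error periodically: the node_ready.go:55 warnings below appear every fourth or fifth GET, roughly once per 2–2.5s. One plausible way to get that cadence (an assumption — the real throttling logic is minikube's) is to rate-limit the warning rather than the poll, as in this sketch that substitutes a plain TCP dial for the GET:

    package main

    import (
    	"fmt"
    	"log"
    	"net"
    	"time"
    )

    func main() {
    	lastWarn := time.Time{}
    	for i := 0; i < 10; i++ {
    		// Stand-in for the GET https://192.168.49.2:8441/api/v1/nodes/... in the log.
    		conn, err := net.DialTimeout("tcp", "192.168.49.2:8441", time.Second)
    		if err == nil {
    			conn.Close()
    			fmt.Println("apiserver reachable")
    			return
    		}
    		// Poll twice a second, but warn at most every two seconds.
    		if time.Since(lastWarn) >= 2*time.Second {
    			log.Printf("error getting node %q condition %q status (will retry): %v",
    				"functional-135520", "Ready", err)
    			lastWarn = time.Now()
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    }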
	I1006 14:21:52.600133  649678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1006 14:21:52.654438  649678 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:21:52.654496  649678 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:52.654515  649678 retry.go:31] will retry after 1.609702337s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:52.655551  649678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 14:21:52.686897  649678 type.go:168] "Request Body" body=""
	I1006 14:21:52.686998  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:21:52.687382  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:21:52.709517  649678 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:21:52.709578  649678 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:52.709602  649678 retry.go:31] will retry after 1.712984533s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:53.186162  649678 type.go:168] "Request Body" body=""
	I1006 14:21:53.186283  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:21:53.186685  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:21:53.686305  649678 type.go:168] "Request Body" body=""
	I1006 14:21:53.686410  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:21:53.686778  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:21:54.186389  649678 type.go:168] "Request Body" body=""
	I1006 14:21:54.186497  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:21:54.186895  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:21:54.186974  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:21:54.265161  649678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1006 14:21:54.320415  649678 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:21:54.320465  649678 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:54.320484  649678 retry.go:31] will retry after 1.901708606s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:54.423753  649678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 14:21:54.478522  649678 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:21:54.478584  649678 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:54.478619  649678 retry.go:31] will retry after 1.584586857s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:54.685879  649678 type.go:168] "Request Body" body=""
	I1006 14:21:54.685954  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:21:54.686309  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:21:55.185880  649678 type.go:168] "Request Body" body=""
	I1006 14:21:55.185961  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:21:55.186309  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:21:55.685969  649678 type.go:168] "Request Body" body=""
	I1006 14:21:55.686071  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:21:55.686478  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:21:56.063981  649678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 14:21:56.118717  649678 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:21:56.118774  649678 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:56.118807  649678 retry.go:31] will retry after 2.733091815s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:56.185931  649678 type.go:168] "Request Body" body=""
	I1006 14:21:56.186008  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:21:56.186344  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:21:56.222525  649678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1006 14:21:56.276120  649678 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:21:56.276196  649678 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:56.276235  649678 retry.go:31] will retry after 1.816128137s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:56.686920  649678 type.go:168] "Request Body" body=""
	I1006 14:21:56.687009  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:21:56.687408  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:21:56.687471  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:21:57.186225  649678 type.go:168] "Request Body" body=""
	I1006 14:21:57.186314  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:21:57.186655  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:21:57.686516  649678 type.go:168] "Request Body" body=""
	I1006 14:21:57.686601  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:21:57.686915  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:21:58.093526  649678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1006 14:21:58.148989  649678 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:21:58.149041  649678 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:58.149066  649678 retry.go:31] will retry after 2.492749577s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:58.186253  649678 type.go:168] "Request Body" body=""
	I1006 14:21:58.186345  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:21:58.186702  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:21:58.686540  649678 type.go:168] "Request Body" body=""
	I1006 14:21:58.686625  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:21:58.686963  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:21:58.852333  649678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 14:21:58.907770  649678 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:21:58.907811  649678 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:58.907831  649678 retry.go:31] will retry after 3.408188619s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:59.186242  649678 type.go:168] "Request Body" body=""
	I1006 14:21:59.186325  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:21:59.186705  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:21:59.186784  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:21:59.686631  649678 type.go:168] "Request Body" body=""
	I1006 14:21:59.686729  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:21:59.687112  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:00.185903  649678 type.go:168] "Request Body" body=""
	I1006 14:22:00.185998  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:00.186365  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:00.642984  649678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1006 14:22:00.686799  649678 type.go:168] "Request Body" body=""
	I1006 14:22:00.686880  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:00.687243  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:00.698375  649678 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:22:00.698427  649678 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:22:00.698448  649678 retry.go:31] will retry after 6.594317937s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:22:01.186036  649678 type.go:168] "Request Body" body=""
	I1006 14:22:01.186143  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:01.186563  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:01.686476  649678 type.go:168] "Request Body" body=""
	I1006 14:22:01.686584  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:01.686981  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:22:01.687058  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:22:02.186608  649678 type.go:168] "Request Body" body=""
	I1006 14:22:02.186705  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:02.187061  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:02.316279  649678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 14:22:02.370200  649678 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:22:02.373358  649678 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:22:02.373390  649678 retry.go:31] will retry after 5.569612861s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:22:02.686858  649678 type.go:168] "Request Body" body=""
	I1006 14:22:02.686947  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:02.687350  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:03.185954  649678 type.go:168] "Request Body" body=""
	I1006 14:22:03.186035  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:03.186451  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:03.686069  649678 type.go:168] "Request Body" body=""
	I1006 14:22:03.686185  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:03.686679  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:04.186146  649678 type.go:168] "Request Body" body=""
	I1006 14:22:04.186265  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:04.186682  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:22:04.186759  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:22:04.686312  649678 type.go:168] "Request Body" body=""
	I1006 14:22:04.686448  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:04.686778  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:05.186355  649678 type.go:168] "Request Body" body=""
	I1006 14:22:05.186442  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:05.186804  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:05.686470  649678 type.go:168] "Request Body" body=""
	I1006 14:22:05.686548  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:05.686892  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:06.186409  649678 type.go:168] "Request Body" body=""
	I1006 14:22:06.186493  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:06.186841  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:22:06.186906  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:22:06.686653  649678 type.go:168] "Request Body" body=""
	I1006 14:22:06.686731  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:06.687077  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:07.186430  649678 type.go:168] "Request Body" body=""
	I1006 14:22:07.186515  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:07.186850  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:07.293062  649678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1006 14:22:07.347879  649678 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:22:07.347938  649678 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:22:07.347958  649678 retry.go:31] will retry after 11.599769479s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
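The delays retry.go has reported for the storageclass apply so far form a jittered, roughly doubling sequence: 289ms, 276ms, 299ms, 1.26s, 1.61s, 1.90s, 1.82s, 2.49s, 6.59s, and now 11.6s. Summing them (quick check below) shows ten failed attempts cost only about 28s of waiting, so backoff is not what burns the test's budget — the time is lost because the apiserver never comes back on 8441:

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	// The storageclass retry delays actually logged above, in nanoseconds.
    	delays := []time.Duration{
    		289440052, 276474886, 299408460, 1255489073, 1609702337,
    		1901708606, 1816128137, 2492749577, 6594317937, 11599769479,
    	}
    	var total time.Duration
    	for _, d := range delays {
    		total += d
    	}
    	fmt.Println(total) // ~28.1s of backoff across ten failed applies
    }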
	I1006 14:22:07.686422  649678 type.go:168] "Request Body" body=""
	I1006 14:22:07.686519  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:07.686919  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:07.943325  649678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 14:22:07.994639  649678 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:22:07.997627  649678 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:22:07.997659  649678 retry.go:31] will retry after 6.982471195s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
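Seventeen seconds in, both the in-node applies (localhost:8441) and the host-side readiness GETs (192.168.49.2:8441) are still refused on the same port, which points at the kube-apiserver static pod itself rather than at networking or the manifests. A triage sketch one could run inside the node over the same SSH channel — the crictl and journalctl invocations are standard, but treating this as the right diagnosis here is an assumption:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Is the kube-apiserver container running at all, and what has the
    	// kubelet been doing since it was started?
    	for _, args := range [][]string{
    		{"sudo", "crictl", "ps", "-a", "--name", "kube-apiserver"},
    		{"sudo", "journalctl", "-u", "kubelet", "--no-pager", "-n", "20"},
    	} {
    		out, err := exec.Command(args[0], args[1:]...).CombinedOutput()
    		fmt.Printf("$ %v\n%s(err: %v)\n", args, out, err)
    	}
    }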
	I1006 14:22:08.186017  649678 type.go:168] "Request Body" body=""
	I1006 14:22:08.186095  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:08.186523  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:08.686113  649678 type.go:168] "Request Body" body=""
	I1006 14:22:08.686234  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:08.686617  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:22:08.686693  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:22:09.186236  649678 type.go:168] "Request Body" body=""
	I1006 14:22:09.186345  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:09.186717  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:09.686283  649678 type.go:168] "Request Body" body=""
	I1006 14:22:09.686365  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:09.686759  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:10.186558  649678 type.go:168] "Request Body" body=""
	I1006 14:22:10.186657  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:10.187046  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:10.686665  649678 type.go:168] "Request Body" body=""
	I1006 14:22:10.686743  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:10.687116  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:22:10.687244  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:22:11.186799  649678 type.go:168] "Request Body" body=""
	I1006 14:22:11.186892  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:11.187296  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:11.686074  649678 type.go:168] "Request Body" body=""
	I1006 14:22:11.686224  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:11.686586  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:12.186151  649678 type.go:168] "Request Body" body=""
	I1006 14:22:12.186305  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:12.186696  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:12.686260  649678 type.go:168] "Request Body" body=""
	I1006 14:22:12.686345  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:12.686706  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:13.186307  649678 type.go:168] "Request Body" body=""
	I1006 14:22:13.186418  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:13.186788  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:22:13.186857  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:22:13.686381  649678 type.go:168] "Request Body" body=""
	I1006 14:22:13.686488  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:13.686854  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:14.186497  649678 type.go:168] "Request Body" body=""
	I1006 14:22:14.186592  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:14.186941  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:14.686598  649678 type.go:168] "Request Body" body=""
	I1006 14:22:14.686682  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:14.687029  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:14.980397  649678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 14:22:15.034191  649678 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:22:15.034263  649678 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:22:15.034288  649678 retry.go:31] will retry after 12.004605903s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
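The non-round interval in "will retry after 12.004605903s" suggests a jittered, growing backoff between apply attempts. A minimal sketch of that pattern, assuming exponential backoff with uniform jitter (our own illustration, not minikube's actual retry.go):

```go
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff keeps calling fn until it succeeds or maxElapsed
// passes, sleeping a growing, jittered interval between attempts and
// logging a "will retry after ..." line like the one above.
func retryWithBackoff(fn func() error, base, maxElapsed time.Duration) error {
	start := time.Now()
	delay := base
	for {
		err := fn()
		if err == nil {
			return nil
		}
		if time.Since(start) > maxElapsed {
			return fmt.Errorf("gave up after %s: %w", time.Since(start), err)
		}
		// Jitter so concurrent retries do not synchronize.
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %s: %v\n", sleep, err)
		time.Sleep(sleep)
		delay *= 2
	}
}

func main() {
	attempts := 0
	_ = retryWithBackoff(func() error {
		attempts++
		if attempts < 4 {
			return fmt.Errorf("connect: connection refused")
		}
		return nil
	}, 2*time.Second, time.Minute)
}
```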
	[... GET polls continue every ~500ms (14:22:15.186–14:22:18.686); node_ready connection-refused warnings repeat at 14:22:15.187 and 14:22:17.686 ...]
	I1006 14:22:18.948057  649678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1006 14:22:19.002723  649678 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:22:19.002770  649678 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:22:19.002791  649678 retry.go:31] will retry after 9.663618433s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
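Each apply attempt shells out to the bundled kubectl, and validation fails because kubectl cannot download the OpenAPI schema from the unreachable apiserver; the error text suggests --validate=false, which skips that check but would not fix the underlying connection refusal. A minimal sketch of the invocation, assuming os/exec and the paths shown in the log (illustrative; not minikube's ssh_runner, which runs the command inside the node over SSH):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// applyManifest mirrors the command the log keeps retrying:
//   kubectl apply --force -f <manifest>
func applyManifest(kubectl, kubeconfig, manifest string) error {
	cmd := exec.Command(kubectl, "apply", "--force", "-f", manifest)
	cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("apply %s: %v\n%s", manifest, err, out)
	}
	return nil
}

func main() {
	// Paths copied from the log; adjust for a local environment.
	if err := applyManifest(
		"/var/lib/minikube/binaries/v1.34.1/kubectl",
		"/var/lib/minikube/kubeconfig",
		"/etc/kubernetes/addons/storageclass.yaml",
	); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```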
	[... GET polls continue every ~500ms (14:22:19.186–14:22:26.686); node_ready connection-refused warnings repeat at 14:22:19.687, 14:22:21.687, 14:22:24.187 and 14:22:26.187 ...]
	I1006 14:22:27.039059  649678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 14:22:27.094007  649678 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:22:27.097496  649678 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:22:27.097534  649678 retry.go:31] will retry after 22.614868096s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... 4 identical GET polls elided (14:22:27.186–14:22:28.186) ...]
	I1006 14:22:28.666677  649678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	[... one GET poll elided (14:22:28.686); node_ready connection-refused warning repeats at 14:22:28.686 ...]
	I1006 14:22:28.722750  649678 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:22:28.722794  649678 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:22:28.722814  649678 retry.go:31] will retry after 11.553901016s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... GET polls continue every ~500ms (14:22:29.186–14:22:40.186); node_ready connection-refused warnings repeat at 14:22:31.186, 14:22:33.686, 14:22:35.687, 14:22:38.186 and 14:22:40.186 ...]
	I1006 14:22:40.276916  649678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1006 14:22:40.331801  649678 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:22:40.335179  649678 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:22:40.335232  649678 retry.go:31] will retry after 39.41387573s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... GET polls continue every ~500ms (14:22:40.686–14:22:49.686); node_ready connection-refused warnings repeat at 14:22:42.187, 14:22:44.687, 14:22:46.687 and 14:22:49.186 ...]
	I1006 14:22:49.713160  649678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 14:22:49.766183  649678 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:22:49.769572  649678 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:22:49.769611  649678 retry.go:31] will retry after 48.442133458s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... GET polls continue every ~500ms (14:22:50.186–14:23:04.186); node_ready connection-refused warnings repeat roughly every 2.5s (14:22:51.187 through 14:23:02.687) ...]
	I1006 14:23:04.685994  649678 type.go:168] "Request Body" body=""
	I1006 14:23:04.686082  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:23:04.686484  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:23:05.186312  649678 type.go:168] "Request Body" body=""
	I1006 14:23:05.186407  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:23:05.186774  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:23:05.186835  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:23:05.686657  649678 type.go:168] "Request Body" body=""
	I1006 14:23:05.686791  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:23:05.687181  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:23:06.186003  649678 type.go:168] "Request Body" body=""
	I1006 14:23:06.186097  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:23:06.186563  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:23:06.686413  649678 type.go:168] "Request Body" body=""
	I1006 14:23:06.686495  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:23:06.686915  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:23:07.186819  649678 type.go:168] "Request Body" body=""
	I1006 14:23:07.186902  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:23:07.187335  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:23:07.187443  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:23:07.685990  649678 type.go:168] "Request Body" body=""
	I1006 14:23:07.686084  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:23:07.686517  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:23:08.186341  649678 type.go:168] "Request Body" body=""
	I1006 14:23:08.186420  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:23:08.186803  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:23:08.686701  649678 type.go:168] "Request Body" body=""
	I1006 14:23:08.686838  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:23:08.687297  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:23:09.186086  649678 type.go:168] "Request Body" body=""
	I1006 14:23:09.186254  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:23:09.186682  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:23:09.686607  649678 type.go:168] "Request Body" body=""
	I1006 14:23:09.686743  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:23:09.687165  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:23:09.687290  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:23:10.185924  649678 type.go:168] "Request Body" body=""
	I1006 14:23:10.186016  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:23:10.186459  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:23:10.686243  649678 type.go:168] "Request Body" body=""
	I1006 14:23:10.686352  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:23:10.686735  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:23:11.186644  649678 type.go:168] "Request Body" body=""
	I1006 14:23:11.186726  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:23:11.187073  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:23:11.685855  649678 type.go:168] "Request Body" body=""
	I1006 14:23:11.685945  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:23:11.686393  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:23:12.186196  649678 type.go:168] "Request Body" body=""
	I1006 14:23:12.186326  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:23:12.186700  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:23:12.186777  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:23:12.686603  649678 type.go:168] "Request Body" body=""
	I1006 14:23:12.686687  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:23:12.687185  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:23:13.186005  649678 type.go:168] "Request Body" body=""
	I1006 14:23:13.186125  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:23:13.186566  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:23:13.686384  649678 type.go:168] "Request Body" body=""
	I1006 14:23:13.686489  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:23:13.686889  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:23:14.186755  649678 type.go:168] "Request Body" body=""
	I1006 14:23:14.186840  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:23:14.187235  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:23:14.187324  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:23:14.686081  649678 type.go:168] "Request Body" body=""
	I1006 14:23:14.686227  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:23:14.686581  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:23:15.186411  649678 type.go:168] "Request Body" body=""
	I1006 14:23:15.186495  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:23:15.186873  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:23:15.686769  649678 type.go:168] "Request Body" body=""
	I1006 14:23:15.686854  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:23:15.687247  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:23:16.186139  649678 type.go:168] "Request Body" body=""
	I1006 14:23:16.186276  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:23:16.186637  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:23:16.686871  649678 type.go:168] "Request Body" body=""
	I1006 14:23:16.686955  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:23:16.687341  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:23:16.687407  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:23:17.186133  649678 type.go:168] "Request Body" body=""
	I1006 14:23:17.186292  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:23:17.186672  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:23:17.686604  649678 type.go:168] "Request Body" body=""
	I1006 14:23:17.686688  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:23:17.687115  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:23:18.185964  649678 type.go:168] "Request Body" body=""
	I1006 14:23:18.186060  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:23:18.186514  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:23:18.686315  649678 type.go:168] "Request Body" body=""
	I1006 14:23:18.686410  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:23:18.686801  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:23:19.186674  649678 type.go:168] "Request Body" body=""
	I1006 14:23:19.186783  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:23:19.187188  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:23:19.187288  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:23:19.686017  649678 type.go:168] "Request Body" body=""
	I1006 14:23:19.686099  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:23:19.686535  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:23:19.749802  649678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1006 14:23:19.804037  649678 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:23:19.807440  649678 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:23:19.807591  649678 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
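
Two things are worth separating in this failure. The validation error is secondary: kubectl tries to download the OpenAPI schema from https://localhost:8441 to validate storageclass.yaml, and that fetch is refused. The error text suggests disabling validation, i.e. a variant of the command from the log (shown here with the extra flag added, which is not what the test ran):

    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.34.1/kubectl apply --force --validate=false \
      -f /etc/kubernetes/addons/storageclass.yaml

But --validate=false would only skip the schema check; the apply itself still has to reach the apiserver on port 8441, which is the actual failure here. That is why minikube logs "apply failed, will retry" before surfacing the error to the user.
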
	I1006 14:23:20.186477  649678 type.go:168] "Request Body" body=""
	I1006 14:23:20.186600  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:23:20.186989  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:23:20.686678  649678 type.go:168] "Request Body" body=""
	I1006 14:23:20.686763  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:23:20.687137  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:23:21.186775  649678 type.go:168] "Request Body" body=""
	I1006 14:23:21.186859  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:23:21.187276  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:23:21.187355  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:23:21.686079  649678 type.go:168] "Request Body" body=""
	I1006 14:23:21.686193  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:23:21.686605  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:23:22.186165  649678 type.go:168] "Request Body" body=""
	I1006 14:23:22.186276  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:23:22.186620  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:23:22.686240  649678 type.go:168] "Request Body" body=""
	I1006 14:23:22.686350  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:23:22.686763  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:23:23.186337  649678 type.go:168] "Request Body" body=""
	I1006 14:23:23.186473  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:23:23.186847  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:23:23.686573  649678 type.go:168] "Request Body" body=""
	I1006 14:23:23.686658  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:23:23.687072  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:23:23.687135  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:23:24.186778  649678 type.go:168] "Request Body" body=""
	I1006 14:23:24.186877  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:23:24.187302  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:23:24.685913  649678 type.go:168] "Request Body" body=""
	I1006 14:23:24.686009  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:23:24.686431  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:23:25.186039  649678 type.go:168] "Request Body" body=""
	I1006 14:23:25.186195  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:23:25.186614  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:23:25.686319  649678 type.go:168] "Request Body" body=""
	I1006 14:23:25.686432  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:23:25.686796  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:23:26.186364  649678 type.go:168] "Request Body" body=""
	I1006 14:23:26.186458  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:23:26.186842  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:23:26.186906  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:23:26.686757  649678 type.go:168] "Request Body" body=""
	I1006 14:23:26.686843  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:23:26.687175  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:23:27.186886  649678 type.go:168] "Request Body" body=""
	I1006 14:23:27.187004  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:23:27.187400  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:23:27.685970  649678 type.go:168] "Request Body" body=""
	I1006 14:23:27.686086  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:23:27.686508  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:23:28.186097  649678 type.go:168] "Request Body" body=""
	I1006 14:23:28.186253  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:23:28.186667  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:23:28.686303  649678 type.go:168] "Request Body" body=""
	I1006 14:23:28.686394  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:23:28.686776  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:23:28.686869  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:23:29.186361  649678 type.go:168] "Request Body" body=""
	I1006 14:23:29.186534  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:23:29.186921  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:23:29.686617  649678 type.go:168] "Request Body" body=""
	I1006 14:23:29.686706  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:23:29.687093  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:23:30.185988  649678 type.go:168] "Request Body" body=""
	I1006 14:23:30.186107  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:23:30.186525  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:23:30.686173  649678 type.go:168] "Request Body" body=""
	I1006 14:23:30.686284  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:23:30.686704  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:23:31.186306  649678 type.go:168] "Request Body" body=""
	I1006 14:23:31.186416  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:23:31.186796  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:23:31.186865  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:23:31.686731  649678 type.go:168] "Request Body" body=""
	I1006 14:23:31.686818  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:23:31.687245  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:23:32.185868  649678 type.go:168] "Request Body" body=""
	I1006 14:23:32.185977  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:23:32.186468  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:23:32.686055  649678 type.go:168] "Request Body" body=""
	I1006 14:23:32.686249  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:23:32.686637  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:23:33.186245  649678 type.go:168] "Request Body" body=""
	I1006 14:23:33.186380  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:23:33.186741  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:23:33.686327  649678 type.go:168] "Request Body" body=""
	I1006 14:23:33.686421  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:23:33.686817  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:23:33.686882  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:23:34.186428  649678 type.go:168] "Request Body" body=""
	I1006 14:23:34.186519  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:23:34.186900  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:23:34.686601  649678 type.go:168] "Request Body" body=""
	I1006 14:23:34.686693  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:23:34.687174  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:23:35.186394  649678 type.go:168] "Request Body" body=""
	I1006 14:23:35.186495  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:23:35.186830  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:23:35.686544  649678 type.go:168] "Request Body" body=""
	I1006 14:23:35.686676  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:23:35.687151  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:23:35.687249  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:23:36.186429  649678 type.go:168] "Request Body" body=""
	I1006 14:23:36.186525  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:23:36.186900  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:23:36.686821  649678 type.go:168] "Request Body" body=""
	I1006 14:23:36.686905  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:23:36.687296  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:23:37.185937  649678 type.go:168] "Request Body" body=""
	I1006 14:23:37.186041  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:23:37.186463  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:23:37.686057  649678 type.go:168] "Request Body" body=""
	I1006 14:23:37.686134  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:23:37.686537  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:23:38.186164  649678 type.go:168] "Request Body" body=""
	I1006 14:23:38.186301  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:23:38.186719  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:23:38.186784  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:23:38.212898  649678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 14:23:38.268129  649678 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:23:38.271217  649678 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:23:38.271448  649678 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1006 14:23:38.274179  649678 out.go:179] * Enabled addons: 
	I1006 14:23:38.275265  649678 addons.go:514] duration metric: took 1m48.200610857s for enable addons: enabled=[]
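
The summary above is easy to misread: "Enabled addons:" followed by enabled=[] means that after 1m48s of retries no addon was actually applied, because the apiserver never became reachable within the window. Once the cluster is healthy again, the addons can typically be re-enabled per profile, for example (assuming the minikube profile shares the node's name, which this excerpt does not confirm):

    minikube addons enable default-storageclass -p functional-135520
    minikube addons enable storage-provisioner -p functional-135520

Both addon names appear verbatim in the failures above; -p selects the minikube profile to operate on.
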
	I1006 14:23:38.686820  649678 type.go:168] "Request Body" body=""
	I1006 14:23:38.686904  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:23:38.687336  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:23:39.186242  649678 type.go:168] "Request Body" body=""
	I1006 14:23:39.186340  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:23:39.186728  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:23:39.686616  649678 type.go:168] "Request Body" body=""
	I1006 14:23:39.686713  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:23:39.687110  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:23:40.185923  649678 type.go:168] "Request Body" body=""
	I1006 14:23:40.186012  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:23:40.186440  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:23:40.686260  649678 type.go:168] "Request Body" body=""
	I1006 14:23:40.686360  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:23:40.686781  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:23:40.686870  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:23:41.186716  649678 type.go:168] "Request Body" body=""
	I1006 14:23:41.186846  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:23:41.187307  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:23:41.686117  649678 type.go:168] "Request Body" body=""
	I1006 14:23:41.686264  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:23:41.686651  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:23:42.186500  649678 type.go:168] "Request Body" body=""
	I1006 14:23:42.186601  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:23:42.187000  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:23:42.686853  649678 type.go:168] "Request Body" body=""
	I1006 14:23:42.686932  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:23:42.687293  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:23:42.687369  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:23:43.186081  649678 type.go:168] "Request Body" body=""
	I1006 14:23:43.186176  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:23:43.186615  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:23:43.686377  649678 type.go:168] "Request Body" body=""
	I1006 14:23:43.686461  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:23:43.686807  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:23:44.186682  649678 type.go:168] "Request Body" body=""
	I1006 14:23:44.186789  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:23:44.187155  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:23:44.685945  649678 type.go:168] "Request Body" body=""
	I1006 14:23:44.686029  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:23:44.686444  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:23:45.186221  649678 type.go:168] "Request Body" body=""
	I1006 14:23:45.186326  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:23:45.186717  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:23:45.186786  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:23:45.686681  649678 type.go:168] "Request Body" body=""
	I1006 14:23:45.686763  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:23:45.687135  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:23:46.185919  649678 type.go:168] "Request Body" body=""
	I1006 14:23:46.186010  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:23:46.186423  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:23:46.686119  649678 type.go:168] "Request Body" body=""
	I1006 14:23:46.686200  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:23:46.686594  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:23:47.186343  649678 type.go:168] "Request Body" body=""
	I1006 14:23:47.186428  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:23:47.186751  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:23:47.186812  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:23:47.686582  649678 type.go:168] "Request Body" body=""
	I1006 14:23:47.686670  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:23:47.687029  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:23:48.186905  649678 type.go:168] "Request Body" body=""
	I1006 14:23:48.187010  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:23:48.187415  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:23:48.686173  649678 type.go:168] "Request Body" body=""
	I1006 14:23:48.686274  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:23:48.686614  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:23:49.186426  649678 type.go:168] "Request Body" body=""
	I1006 14:23:49.186559  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:23:49.187170  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:23:49.187283  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:23:49.686055  649678 type.go:168] "Request Body" body=""
	I1006 14:23:49.686162  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:23:49.686567  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... log condensed: the request/response pair above repeats unchanged every ~500 ms from 14:23:50.186 through 14:24:51.687, each GET to https://192.168.49.2:8441/api/v1/nodes/functional-135520 returning an empty response in 0 ms. At roughly 2–2.5 s intervals the wait loop emits the same warning:]
	W1006 14:23:51.187357  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	[... that warning recurs from 14:23:51 through 14:24:51 while the apiserver keeps refusing connections; the raw log resumes at 14:24:52 below ...]
	I1006 14:24:52.186636  649678 type.go:168] "Request Body" body=""
	I1006 14:24:52.186724  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:52.187108  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:52.686753  649678 type.go:168] "Request Body" body=""
	I1006 14:24:52.686831  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:52.687267  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:53.185896  649678 type.go:168] "Request Body" body=""
	I1006 14:24:53.185979  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:53.186366  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:53.685914  649678 type.go:168] "Request Body" body=""
	I1006 14:24:53.685990  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:53.686334  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:54.185922  649678 type.go:168] "Request Body" body=""
	I1006 14:24:54.186002  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:54.186408  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:24:54.186489  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:24:54.685967  649678 type.go:168] "Request Body" body=""
	I1006 14:24:54.686051  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:54.686451  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:55.186040  649678 type.go:168] "Request Body" body=""
	I1006 14:24:55.186122  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:55.186477  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:55.686036  649678 type.go:168] "Request Body" body=""
	I1006 14:24:55.686113  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:55.686480  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:56.186026  649678 type.go:168] "Request Body" body=""
	I1006 14:24:56.186104  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:56.186478  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:24:56.186550  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:24:56.686248  649678 type.go:168] "Request Body" body=""
	I1006 14:24:56.686329  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:56.686693  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:57.186234  649678 type.go:168] "Request Body" body=""
	I1006 14:24:57.186315  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:57.186630  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:57.686283  649678 type.go:168] "Request Body" body=""
	I1006 14:24:57.686402  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:57.686814  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:58.186365  649678 type.go:168] "Request Body" body=""
	I1006 14:24:58.186450  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:58.186794  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:24:58.186858  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:24:58.686485  649678 type.go:168] "Request Body" body=""
	I1006 14:24:58.686625  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:58.687000  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:59.186645  649678 type.go:168] "Request Body" body=""
	I1006 14:24:59.186728  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:59.187067  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:59.686701  649678 type.go:168] "Request Body" body=""
	I1006 14:24:59.686778  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:59.687158  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:00.185971  649678 type.go:168] "Request Body" body=""
	I1006 14:25:00.186051  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:00.186405  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:00.686037  649678 type.go:168] "Request Body" body=""
	I1006 14:25:00.686117  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:00.686528  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:25:00.686606  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:25:01.186098  649678 type.go:168] "Request Body" body=""
	I1006 14:25:01.186186  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:01.186639  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:01.686574  649678 type.go:168] "Request Body" body=""
	I1006 14:25:01.686664  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:01.687059  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:02.186731  649678 type.go:168] "Request Body" body=""
	I1006 14:25:02.186819  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:02.187259  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:02.685880  649678 type.go:168] "Request Body" body=""
	I1006 14:25:02.685972  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:02.686460  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:03.186037  649678 type.go:168] "Request Body" body=""
	I1006 14:25:03.186117  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:03.186526  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:25:03.186595  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:25:03.686186  649678 type.go:168] "Request Body" body=""
	I1006 14:25:03.686282  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:03.686638  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:04.186251  649678 type.go:168] "Request Body" body=""
	I1006 14:25:04.186325  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:04.186672  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:04.686261  649678 type.go:168] "Request Body" body=""
	I1006 14:25:04.686346  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:04.686697  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:05.186293  649678 type.go:168] "Request Body" body=""
	I1006 14:25:05.186374  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:05.186780  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:25:05.186857  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:25:05.686332  649678 type.go:168] "Request Body" body=""
	I1006 14:25:05.686416  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:05.686772  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:06.186370  649678 type.go:168] "Request Body" body=""
	I1006 14:25:06.186449  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:06.186819  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:06.686670  649678 type.go:168] "Request Body" body=""
	I1006 14:25:06.686749  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:06.687114  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:07.186765  649678 type.go:168] "Request Body" body=""
	I1006 14:25:07.186854  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:07.187255  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:25:07.187328  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:25:07.686866  649678 type.go:168] "Request Body" body=""
	I1006 14:25:07.686945  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:07.687337  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:08.185991  649678 type.go:168] "Request Body" body=""
	I1006 14:25:08.186073  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:08.186473  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:08.686026  649678 type.go:168] "Request Body" body=""
	I1006 14:25:08.686101  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:08.686467  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:09.186027  649678 type.go:168] "Request Body" body=""
	I1006 14:25:09.186117  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:09.186491  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:09.686131  649678 type.go:168] "Request Body" body=""
	I1006 14:25:09.686218  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:09.686554  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:25:09.686624  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:25:10.186421  649678 type.go:168] "Request Body" body=""
	I1006 14:25:10.186509  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:10.186885  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:10.686589  649678 type.go:168] "Request Body" body=""
	I1006 14:25:10.686673  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:10.687059  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:11.186451  649678 type.go:168] "Request Body" body=""
	I1006 14:25:11.186534  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:11.186908  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:11.686874  649678 type.go:168] "Request Body" body=""
	I1006 14:25:11.686958  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:11.687404  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:25:11.687478  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:25:12.186004  649678 type.go:168] "Request Body" body=""
	I1006 14:25:12.186089  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:12.186488  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:12.686071  649678 type.go:168] "Request Body" body=""
	I1006 14:25:12.686175  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:12.686583  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:13.186311  649678 type.go:168] "Request Body" body=""
	I1006 14:25:13.186394  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:13.186794  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:13.686469  649678 type.go:168] "Request Body" body=""
	I1006 14:25:13.686560  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:13.686955  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:14.186674  649678 type.go:168] "Request Body" body=""
	I1006 14:25:14.186764  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:14.187198  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:25:14.187305  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:25:14.686830  649678 type.go:168] "Request Body" body=""
	I1006 14:25:14.686915  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:14.687318  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:15.185883  649678 type.go:168] "Request Body" body=""
	I1006 14:25:15.185963  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:15.186381  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:15.685988  649678 type.go:168] "Request Body" body=""
	I1006 14:25:15.686075  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:15.686471  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:16.186057  649678 type.go:168] "Request Body" body=""
	I1006 14:25:16.186159  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:16.186628  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:16.686506  649678 type.go:168] "Request Body" body=""
	I1006 14:25:16.686586  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:16.686922  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:25:16.686991  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:25:17.186686  649678 type.go:168] "Request Body" body=""
	I1006 14:25:17.186779  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:17.187190  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:17.686871  649678 type.go:168] "Request Body" body=""
	I1006 14:25:17.686958  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:17.687378  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:18.185930  649678 type.go:168] "Request Body" body=""
	I1006 14:25:18.186011  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:18.186362  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:18.686006  649678 type.go:168] "Request Body" body=""
	I1006 14:25:18.686091  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:18.686522  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:19.186154  649678 type.go:168] "Request Body" body=""
	I1006 14:25:19.186270  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:19.186661  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:25:19.186738  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:25:19.686272  649678 type.go:168] "Request Body" body=""
	I1006 14:25:19.686357  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:19.686722  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:20.186620  649678 type.go:168] "Request Body" body=""
	I1006 14:25:20.186712  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:20.187085  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:20.686732  649678 type.go:168] "Request Body" body=""
	I1006 14:25:20.686813  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:20.687200  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:21.186886  649678 type.go:168] "Request Body" body=""
	I1006 14:25:21.186971  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:21.187421  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:25:21.187498  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:25:21.686192  649678 type.go:168] "Request Body" body=""
	I1006 14:25:21.686313  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:21.686703  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:22.186337  649678 type.go:168] "Request Body" body=""
	I1006 14:25:22.186443  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:22.186816  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:22.686392  649678 type.go:168] "Request Body" body=""
	I1006 14:25:22.686470  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:22.686872  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:23.186538  649678 type.go:168] "Request Body" body=""
	I1006 14:25:23.186623  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:23.186990  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:23.686645  649678 type.go:168] "Request Body" body=""
	I1006 14:25:23.686745  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:23.687147  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:25:23.687255  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:25:24.186838  649678 type.go:168] "Request Body" body=""
	I1006 14:25:24.186917  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:24.187309  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:24.685862  649678 type.go:168] "Request Body" body=""
	I1006 14:25:24.685944  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:24.686370  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:25.185903  649678 type.go:168] "Request Body" body=""
	I1006 14:25:25.185979  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:25.186373  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:25.685951  649678 type.go:168] "Request Body" body=""
	I1006 14:25:25.686032  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:25.686450  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:26.186018  649678 type.go:168] "Request Body" body=""
	I1006 14:25:26.186098  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:26.186497  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:25:26.186566  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:25:26.686293  649678 type.go:168] "Request Body" body=""
	I1006 14:25:26.686378  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:26.686746  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:27.186364  649678 type.go:168] "Request Body" body=""
	I1006 14:25:27.186454  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:27.186827  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:27.686418  649678 type.go:168] "Request Body" body=""
	I1006 14:25:27.686503  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:27.686844  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:28.186581  649678 type.go:168] "Request Body" body=""
	I1006 14:25:28.186676  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:28.187085  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:25:28.187196  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:25:28.686665  649678 type.go:168] "Request Body" body=""
	I1006 14:25:28.686737  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:28.687051  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:29.186712  649678 type.go:168] "Request Body" body=""
	I1006 14:25:29.186801  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:29.187161  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:29.685861  649678 type.go:168] "Request Body" body=""
	I1006 14:25:29.685951  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:29.686323  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:30.186241  649678 type.go:168] "Request Body" body=""
	I1006 14:25:30.186336  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:30.186725  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:30.686347  649678 type.go:168] "Request Body" body=""
	I1006 14:25:30.686438  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:30.686799  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:25:30.686867  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:25:31.186356  649678 type.go:168] "Request Body" body=""
	I1006 14:25:31.186436  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:31.186790  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:31.686720  649678 type.go:168] "Request Body" body=""
	I1006 14:25:31.686801  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:31.687239  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:32.186431  649678 type.go:168] "Request Body" body=""
	I1006 14:25:32.186515  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:32.186873  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:32.686520  649678 type.go:168] "Request Body" body=""
	I1006 14:25:32.686601  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:32.686977  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:25:32.687047  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:25:33.186626  649678 type.go:168] "Request Body" body=""
	I1006 14:25:33.186710  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:33.187075  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:33.686716  649678 type.go:168] "Request Body" body=""
	I1006 14:25:33.686805  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:33.687167  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:34.186823  649678 type.go:168] "Request Body" body=""
	I1006 14:25:34.186903  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:34.187273  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:34.685846  649678 type.go:168] "Request Body" body=""
	I1006 14:25:34.685928  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:34.686316  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:35.185913  649678 type.go:168] "Request Body" body=""
	I1006 14:25:35.186011  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:35.186468  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:25:35.186536  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:25:35.686056  649678 type.go:168] "Request Body" body=""
	I1006 14:25:35.686142  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:35.686600  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:36.186122  649678 type.go:168] "Request Body" body=""
	I1006 14:25:36.186200  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:36.186601  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:36.686430  649678 type.go:168] "Request Body" body=""
	I1006 14:25:36.686510  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:36.686854  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:37.186453  649678 type.go:168] "Request Body" body=""
	I1006 14:25:37.186544  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:37.186881  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:25:37.186946  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:25:37.686555  649678 type.go:168] "Request Body" body=""
	I1006 14:25:37.686635  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:37.686983  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:38.186591  649678 type.go:168] "Request Body" body=""
	I1006 14:25:38.186672  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:38.187012  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:38.686677  649678 type.go:168] "Request Body" body=""
	I1006 14:25:38.686752  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:38.687074  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:39.186406  649678 type.go:168] "Request Body" body=""
	I1006 14:25:39.186486  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:39.186779  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:39.686380  649678 type.go:168] "Request Body" body=""
	I1006 14:25:39.686456  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:39.686788  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:25:39.686849  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:25:40.186552  649678 type.go:168] "Request Body" body=""
	I1006 14:25:40.186636  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:40.186983  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:40.686686  649678 type.go:168] "Request Body" body=""
	I1006 14:25:40.686772  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:40.687136  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:41.186786  649678 type.go:168] "Request Body" body=""
	I1006 14:25:41.186883  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:41.187296  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:41.686115  649678 type.go:168] "Request Body" body=""
	I1006 14:25:41.686197  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:41.686611  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:42.186247  649678 type.go:168] "Request Body" body=""
	I1006 14:25:42.186349  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:42.186752  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:25:42.186818  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:25:42.686348  649678 type.go:168] "Request Body" body=""
	I1006 14:25:42.686429  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:42.686809  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:43.186383  649678 type.go:168] "Request Body" body=""
	I1006 14:25:43.186476  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:43.186825  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:43.686373  649678 type.go:168] "Request Body" body=""
	I1006 14:25:43.686447  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:43.686785  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:44.186380  649678 type.go:168] "Request Body" body=""
	I1006 14:25:44.186471  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:44.186817  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:25:44.186878  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:25:44.686508  649678 type.go:168] "Request Body" body=""
	I1006 14:25:44.686586  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:44.686949  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:45.186631  649678 type.go:168] "Request Body" body=""
	I1006 14:25:45.186709  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:45.187070  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:45.686683  649678 type.go:168] "Request Body" body=""
	I1006 14:25:45.686760  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:45.687117  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:46.186771  649678 type.go:168] "Request Body" body=""
	I1006 14:25:46.186856  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:46.187161  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:25:46.187239  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:25:46.685960  649678 type.go:168] "Request Body" body=""
	I1006 14:25:46.686053  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:46.686491  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:47.186117  649678 type.go:168] "Request Body" body=""
	I1006 14:25:47.186232  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:47.186563  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:47.686262  649678 type.go:168] "Request Body" body=""
	I1006 14:25:47.686353  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:47.686735  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:48.186344  649678 type.go:168] "Request Body" body=""
	I1006 14:25:48.186436  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:48.186775  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:48.686380  649678 type.go:168] "Request Body" body=""
	I1006 14:25:48.686477  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:48.686837  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:25:48.686901  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:25:49.186520  649678 type.go:168] "Request Body" body=""
	I1006 14:25:49.186599  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:49.186960  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:49.686576  649678 type.go:168] "Request Body" body=""
	I1006 14:25:49.686696  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:49.687078  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:50.186881  649678 type.go:168] "Request Body" body=""
	I1006 14:25:50.186973  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:50.187437  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:50.685990  649678 type.go:168] "Request Body" body=""
	I1006 14:25:50.686074  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:50.686473  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:51.186300  649678 type.go:168] "Request Body" body=""
	I1006 14:25:51.186379  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:51.186743  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:25:51.186811  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:25:51.686703  649678 type.go:168] "Request Body" body=""
	I1006 14:25:51.686798  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:51.687173  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:52.186898  649678 type.go:168] "Request Body" body=""
	I1006 14:25:52.186995  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:52.187412  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:52.686051  649678 type.go:168] "Request Body" body=""
	I1006 14:25:52.686131  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:52.686542  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:53.186148  649678 type.go:168] "Request Body" body=""
	I1006 14:25:53.186271  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:53.186618  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:53.686257  649678 type.go:168] "Request Body" body=""
	I1006 14:25:53.686333  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:53.686629  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:25:53.686692  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:25:54.186270  649678 type.go:168] "Request Body" body=""
	I1006 14:25:54.186349  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:54.186708  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:54.686271  649678 type.go:168] "Request Body" body=""
	I1006 14:25:54.686350  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:54.686763  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:55.186342  649678 type.go:168] "Request Body" body=""
	I1006 14:25:55.186429  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:55.186784  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:55.686364  649678 type.go:168] "Request Body" body=""
	I1006 14:25:55.686460  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:55.686892  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:25:55.686972  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:25:56.186543  649678 type.go:168] "Request Body" body=""
	I1006 14:25:56.186621  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:56.186957  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:56.686715  649678 type.go:168] "Request Body" body=""
	I1006 14:25:56.686790  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:56.687141  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:57.186851  649678 type.go:168] "Request Body" body=""
	I1006 14:25:57.186936  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:57.187306  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:57.686906  649678 type.go:168] "Request Body" body=""
	I1006 14:25:57.686983  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:57.687342  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:25:57.687412  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:25:58.185932  649678 type.go:168] "Request Body" body=""
	I1006 14:25:58.186017  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:58.186400  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:58.685929  649678 type.go:168] "Request Body" body=""
	I1006 14:25:58.686006  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:58.686337  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:59.185922  649678 type.go:168] "Request Body" body=""
	I1006 14:25:59.186001  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:59.186386  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:59.685924  649678 type.go:168] "Request Body" body=""
	I1006 14:25:59.686004  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:59.686375  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:00.186296  649678 type.go:168] "Request Body" body=""
	I1006 14:26:00.186378  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:00.186687  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:26:00.186765  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:26:00.686277  649678 type.go:168] "Request Body" body=""
	I1006 14:26:00.686360  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:00.686729  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:01.186343  649678 type.go:168] "Request Body" body=""
	I1006 14:26:01.186429  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:01.186799  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:01.686640  649678 type.go:168] "Request Body" body=""
	I1006 14:26:01.686731  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:01.687113  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:02.186812  649678 type.go:168] "Request Body" body=""
	I1006 14:26:02.186901  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:02.187298  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:26:02.187363  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:26:02.686912  649678 type.go:168] "Request Body" body=""
	I1006 14:26:02.686991  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:02.687387  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:03.186002  649678 type.go:168] "Request Body" body=""
	I1006 14:26:03.186084  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:03.186473  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:03.685977  649678 type.go:168] "Request Body" body=""
	I1006 14:26:03.686048  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:03.686381  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:04.185981  649678 type.go:168] "Request Body" body=""
	I1006 14:26:04.186057  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:04.186423  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:04.685971  649678 type.go:168] "Request Body" body=""
	I1006 14:26:04.686060  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:04.686445  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:26:04.686508  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:26:05.186070  649678 type.go:168] "Request Body" body=""
	I1006 14:26:05.186157  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:05.186570  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:05.686148  649678 type.go:168] "Request Body" body=""
	I1006 14:26:05.686264  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:05.686629  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:06.186273  649678 type.go:168] "Request Body" body=""
	I1006 14:26:06.186358  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:06.186714  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:06.686539  649678 type.go:168] "Request Body" body=""
	I1006 14:26:06.686626  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:06.686991  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:26:06.687057  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:26:07.186691  649678 type.go:168] "Request Body" body=""
	I1006 14:26:07.186766  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:07.187071  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:07.686715  649678 type.go:168] "Request Body" body=""
	I1006 14:26:07.686797  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:07.687168  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:08.186877  649678 type.go:168] "Request Body" body=""
	I1006 14:26:08.186969  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:08.187376  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:08.685874  649678 type.go:168] "Request Body" body=""
	I1006 14:26:08.685947  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:08.686343  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:09.185901  649678 type.go:168] "Request Body" body=""
	I1006 14:26:09.185986  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:09.186361  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:26:09.186422  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:26:09.685934  649678 type.go:168] "Request Body" body=""
	I1006 14:26:09.686008  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:09.686381  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:10.186337  649678 type.go:168] "Request Body" body=""
	I1006 14:26:10.186418  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:10.186799  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:10.686458  649678 type.go:168] "Request Body" body=""
	I1006 14:26:10.686543  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:10.686962  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:11.186624  649678 type.go:168] "Request Body" body=""
	I1006 14:26:11.186717  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:11.187101  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:26:11.187175  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:26:11.685850  649678 type.go:168] "Request Body" body=""
	I1006 14:26:11.685927  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:11.686323  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:12.185918  649678 type.go:168] "Request Body" body=""
	I1006 14:26:12.185998  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:12.186408  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:12.686005  649678 type.go:168] "Request Body" body=""
	I1006 14:26:12.686089  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:12.686517  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:13.186107  649678 type.go:168] "Request Body" body=""
	I1006 14:26:13.186230  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:13.186588  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:13.686197  649678 type.go:168] "Request Body" body=""
	I1006 14:26:13.686355  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:13.686711  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:26:13.686772  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:26:14.186309  649678 type.go:168] "Request Body" body=""
	I1006 14:26:14.186392  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:14.186749  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:14.686366  649678 type.go:168] "Request Body" body=""
	I1006 14:26:14.686450  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:14.686778  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:15.185991  649678 type.go:168] "Request Body" body=""
	I1006 14:26:15.186103  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:15.186529  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:15.686135  649678 type.go:168] "Request Body" body=""
	I1006 14:26:15.686243  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:15.686610  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:16.186323  649678 type.go:168] "Request Body" body=""
	I1006 14:26:16.186429  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:16.186768  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:26:16.186838  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:26:16.686609  649678 type.go:168] "Request Body" body=""
	I1006 14:26:16.686694  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:16.687041  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:17.186702  649678 type.go:168] "Request Body" body=""
	I1006 14:26:17.186792  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:17.187231  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:17.686866  649678 type.go:168] "Request Body" body=""
	I1006 14:26:17.686950  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:17.687324  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:18.185952  649678 type.go:168] "Request Body" body=""
	I1006 14:26:18.186030  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:18.186428  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:18.685978  649678 type.go:168] "Request Body" body=""
	I1006 14:26:18.686051  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:18.686440  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:26:18.686507  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:26:19.186006  649678 type.go:168] "Request Body" body=""
	I1006 14:26:19.186087  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:19.186501  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:19.686063  649678 type.go:168] "Request Body" body=""
	I1006 14:26:19.686139  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:19.686531  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:20.186356  649678 type.go:168] "Request Body" body=""
	I1006 14:26:20.186443  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:20.186802  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:20.686408  649678 type.go:168] "Request Body" body=""
	I1006 14:26:20.686495  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:20.686850  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:26:20.686922  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:26:21.186511  649678 type.go:168] "Request Body" body=""
	I1006 14:26:21.186587  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:21.186942  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:21.686813  649678 type.go:168] "Request Body" body=""
	I1006 14:26:21.686900  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:21.687313  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:22.185849  649678 type.go:168] "Request Body" body=""
	I1006 14:26:22.185931  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:22.186339  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:22.685929  649678 type.go:168] "Request Body" body=""
	I1006 14:26:22.686007  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:22.686413  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:23.186016  649678 type.go:168] "Request Body" body=""
	I1006 14:26:23.186102  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:23.186494  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:26:23.186565  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:26:23.686035  649678 type.go:168] "Request Body" body=""
	I1006 14:26:23.686107  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:23.686482  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:24.186086  649678 type.go:168] "Request Body" body=""
	I1006 14:26:24.186175  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:24.186554  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:24.686126  649678 type.go:168] "Request Body" body=""
	I1006 14:26:24.686237  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:24.686577  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:25.186280  649678 type.go:168] "Request Body" body=""
	I1006 14:26:25.186363  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:25.186729  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:26:25.186793  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:26:25.686357  649678 type.go:168] "Request Body" body=""
	I1006 14:26:25.686450  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:25.686832  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:26.186509  649678 type.go:168] "Request Body" body=""
	I1006 14:26:26.186599  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:26.186933  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:26.686731  649678 type.go:168] "Request Body" body=""
	I1006 14:26:26.686807  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:26.687178  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:27.186830  649678 type.go:168] "Request Body" body=""
	I1006 14:26:27.186916  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:27.187303  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:26:27.187367  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:26:27.685989  649678 type.go:168] "Request Body" body=""
	I1006 14:26:27.686079  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:27.686515  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:28.186104  649678 type.go:168] "Request Body" body=""
	I1006 14:26:28.186234  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:28.186665  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:28.686340  649678 type.go:168] "Request Body" body=""
	I1006 14:26:28.686435  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:28.686828  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:29.186495  649678 type.go:168] "Request Body" body=""
	I1006 14:26:29.186583  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:29.186957  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:29.686668  649678 type.go:168] "Request Body" body=""
	I1006 14:26:29.686747  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:29.687084  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:26:29.687155  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:26:30.185982  649678 type.go:168] "Request Body" body=""
	I1006 14:26:30.186084  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:30.186533  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:30.686149  649678 type.go:168] "Request Body" body=""
	I1006 14:26:30.686258  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:30.686621  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:31.186197  649678 type.go:168] "Request Body" body=""
	I1006 14:26:31.186328  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:31.186681  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:31.686544  649678 type.go:168] "Request Body" body=""
	I1006 14:26:31.686625  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:31.687002  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:32.186625  649678 type.go:168] "Request Body" body=""
	I1006 14:26:32.186728  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:32.187110  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:26:32.187243  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:26:32.686763  649678 type.go:168] "Request Body" body=""
	I1006 14:26:32.686849  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:32.687250  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:33.185866  649678 type.go:168] "Request Body" body=""
	I1006 14:26:33.185966  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:33.186401  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:33.685998  649678 type.go:168] "Request Body" body=""
	I1006 14:26:33.686076  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:33.686491  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:34.186036  649678 type.go:168] "Request Body" body=""
	I1006 14:26:34.186137  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:34.186537  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:34.686069  649678 type.go:168] "Request Body" body=""
	I1006 14:26:34.686144  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:34.686500  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:26:34.686564  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:26:35.186170  649678 type.go:168] "Request Body" body=""
	I1006 14:26:35.186296  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:35.186675  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:35.686291  649678 type.go:168] "Request Body" body=""
	I1006 14:26:35.686375  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:35.686758  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:36.186396  649678 type.go:168] "Request Body" body=""
	I1006 14:26:36.186499  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:36.186883  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:36.686651  649678 type.go:168] "Request Body" body=""
	I1006 14:26:36.686732  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:36.687079  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:26:36.687145  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:26:37.186756  649678 type.go:168] "Request Body" body=""
	I1006 14:26:37.186868  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:37.187300  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... the same GET https://192.168.49.2:8441/api/v1/nodes/functional-135520 request/response pair repeats every ~500ms from 14:26:37 through 14:27:38, each attempt logged with status="" headers="" milliseconds=0 (no response); node_ready.go:55 records the identical "will retry" warning (dial tcp 192.168.49.2:8441: connect: connection refused) roughly every two seconds throughout this window ...]
	I1006 14:27:39.186040  649678 type.go:168] "Request Body" body=""
	I1006 14:27:39.186119  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:39.186517  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:39.686067  649678 type.go:168] "Request Body" body=""
	I1006 14:27:39.686152  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:39.686509  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:27:39.686570  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:27:40.186335  649678 type.go:168] "Request Body" body=""
	I1006 14:27:40.186421  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:40.186798  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:40.686383  649678 type.go:168] "Request Body" body=""
	I1006 14:27:40.686477  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:40.686843  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:41.186496  649678 type.go:168] "Request Body" body=""
	I1006 14:27:41.186589  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:41.186955  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:41.686485  649678 type.go:168] "Request Body" body=""
	I1006 14:27:41.686563  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:41.686938  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:27:41.687005  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:27:42.186439  649678 type.go:168] "Request Body" body=""
	I1006 14:27:42.186523  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:42.186890  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:42.686663  649678 type.go:168] "Request Body" body=""
	I1006 14:27:42.686739  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:42.687098  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:43.186774  649678 type.go:168] "Request Body" body=""
	I1006 14:27:43.186856  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:43.187251  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:43.686855  649678 type.go:168] "Request Body" body=""
	I1006 14:27:43.686937  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:43.687333  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:27:43.687401  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:27:44.185915  649678 type.go:168] "Request Body" body=""
	I1006 14:27:44.185993  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:44.186423  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:44.685989  649678 type.go:168] "Request Body" body=""
	I1006 14:27:44.686091  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:44.686498  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:45.186085  649678 type.go:168] "Request Body" body=""
	I1006 14:27:45.186165  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:45.186565  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:45.686116  649678 type.go:168] "Request Body" body=""
	I1006 14:27:45.686239  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:45.686593  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:46.186172  649678 type.go:168] "Request Body" body=""
	I1006 14:27:46.186282  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:46.186664  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:27:46.186734  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:27:46.686523  649678 type.go:168] "Request Body" body=""
	I1006 14:27:46.686604  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:46.686968  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:47.186636  649678 type.go:168] "Request Body" body=""
	I1006 14:27:47.186712  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:47.187063  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:47.686695  649678 type.go:168] "Request Body" body=""
	I1006 14:27:47.686772  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:47.687119  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:48.186827  649678 type.go:168] "Request Body" body=""
	I1006 14:27:48.186919  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:48.187317  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:27:48.187383  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:27:48.685929  649678 type.go:168] "Request Body" body=""
	I1006 14:27:48.686009  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:48.686363  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:49.185988  649678 type.go:168] "Request Body" body=""
	I1006 14:27:49.186066  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:49.186471  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:49.686018  649678 type.go:168] "Request Body" body=""
	I1006 14:27:49.686094  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:49.686456  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:50.186006  649678 node_ready.go:38] duration metric: took 6m0.000261558s for node "functional-135520" to be "Ready" ...
	I1006 14:27:50.189087  649678 out.go:203] 
	W1006 14:27:50.190513  649678 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1006 14:27:50.190545  649678 out.go:285] * 
	W1006 14:27:50.192353  649678 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1006 14:27:50.193614  649678 out.go:203] 
	
	
	==> CRI-O <==
	Oct 06 14:27:46 functional-135520 crio[2950]: time="2025-10-06T14:27:46.537419135Z" level=info msg="createCtr: removing container f80a0bc34f4906badae74343ef10a13edfa6593b57364ee2ca15c1e45cb44c93" id=ace47d13-2cff-4b23-8acb-40ad278ca282 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:27:46 functional-135520 crio[2950]: time="2025-10-06T14:27:46.53746026Z" level=info msg="createCtr: deleting container f80a0bc34f4906badae74343ef10a13edfa6593b57364ee2ca15c1e45cb44c93 from storage" id=ace47d13-2cff-4b23-8acb-40ad278ca282 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:27:46 functional-135520 crio[2950]: time="2025-10-06T14:27:46.539305817Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-functional-135520_kube-system_f24ebbe4b3fc964d32e35d345c0d3653_0" id=ace47d13-2cff-4b23-8acb-40ad278ca282 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:27:47 functional-135520 crio[2950]: time="2025-10-06T14:27:47.516327205Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=f3c3cfee-4381-4062-9878-d3a682d6b077 name=/runtime.v1.ImageService/ImageStatus
	Oct 06 14:27:47 functional-135520 crio[2950]: time="2025-10-06T14:27:47.517175909Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=e1ad7734-e28a-4ef9-ac87-ff6a11a9b1fa name=/runtime.v1.ImageService/ImageStatus
	Oct 06 14:27:47 functional-135520 crio[2950]: time="2025-10-06T14:27:47.51810305Z" level=info msg="Creating container: kube-system/kube-apiserver-functional-135520/kube-apiserver" id=a1231ed3-9295-4326-ba3f-48f5ca67863c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:27:47 functional-135520 crio[2950]: time="2025-10-06T14:27:47.518388126Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 14:27:47 functional-135520 crio[2950]: time="2025-10-06T14:27:47.522733428Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 14:27:47 functional-135520 crio[2950]: time="2025-10-06T14:27:47.523451313Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 14:27:47 functional-135520 crio[2950]: time="2025-10-06T14:27:47.542007657Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=a1231ed3-9295-4326-ba3f-48f5ca67863c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:27:47 functional-135520 crio[2950]: time="2025-10-06T14:27:47.543292385Z" level=info msg="createCtr: deleting container ID 0dc82131c04d9ac24c1a4973bf654cfe15f2802424cb559d5727a3e886571a9c from idIndex" id=a1231ed3-9295-4326-ba3f-48f5ca67863c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:27:47 functional-135520 crio[2950]: time="2025-10-06T14:27:47.543325041Z" level=info msg="createCtr: removing container 0dc82131c04d9ac24c1a4973bf654cfe15f2802424cb559d5727a3e886571a9c" id=a1231ed3-9295-4326-ba3f-48f5ca67863c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:27:47 functional-135520 crio[2950]: time="2025-10-06T14:27:47.543353686Z" level=info msg="createCtr: deleting container 0dc82131c04d9ac24c1a4973bf654cfe15f2802424cb559d5727a3e886571a9c from storage" id=a1231ed3-9295-4326-ba3f-48f5ca67863c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:27:47 functional-135520 crio[2950]: time="2025-10-06T14:27:47.545165252Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-functional-135520_kube-system_64c921c0d544efd1faaa2d85c050bc13_0" id=a1231ed3-9295-4326-ba3f-48f5ca67863c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:27:48 functional-135520 crio[2950]: time="2025-10-06T14:27:48.516281237Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=e09686fa-6b36-4172-b0fd-7c3937c59ca0 name=/runtime.v1.ImageService/ImageStatus
	Oct 06 14:27:48 functional-135520 crio[2950]: time="2025-10-06T14:27:48.517137159Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=3f9670e4-c9b8-4ebd-ad5b-eca380b40295 name=/runtime.v1.ImageService/ImageStatus
	Oct 06 14:27:48 functional-135520 crio[2950]: time="2025-10-06T14:27:48.518045551Z" level=info msg="Creating container: kube-system/kube-scheduler-functional-135520/kube-scheduler" id=f28264a9-ff49-4a8a-a176-67a7f8d3e48f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:27:48 functional-135520 crio[2950]: time="2025-10-06T14:27:48.518303592Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 14:27:48 functional-135520 crio[2950]: time="2025-10-06T14:27:48.521571675Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 14:27:48 functional-135520 crio[2950]: time="2025-10-06T14:27:48.521988529Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 14:27:48 functional-135520 crio[2950]: time="2025-10-06T14:27:48.53715491Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=f28264a9-ff49-4a8a-a176-67a7f8d3e48f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:27:48 functional-135520 crio[2950]: time="2025-10-06T14:27:48.538436064Z" level=info msg="createCtr: deleting container ID 53f44639142744b47b0894826d110b7fa6706512d5ce9b8100673f21c18db971 from idIndex" id=f28264a9-ff49-4a8a-a176-67a7f8d3e48f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:27:48 functional-135520 crio[2950]: time="2025-10-06T14:27:48.538465371Z" level=info msg="createCtr: removing container 53f44639142744b47b0894826d110b7fa6706512d5ce9b8100673f21c18db971" id=f28264a9-ff49-4a8a-a176-67a7f8d3e48f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:27:48 functional-135520 crio[2950]: time="2025-10-06T14:27:48.538492974Z" level=info msg="createCtr: deleting container 53f44639142744b47b0894826d110b7fa6706512d5ce9b8100673f21c18db971 from storage" id=f28264a9-ff49-4a8a-a176-67a7f8d3e48f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:27:48 functional-135520 crio[2950]: time="2025-10-06T14:27:48.54058168Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-functional-135520_kube-system_5115bd1eba9594a3f2b99b5d6a4b9d59_0" id=f28264a9-ff49-4a8a-a176-67a7f8d3e48f name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:27:51.991135    4368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:27:51.991738    4368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:27:51.993337    4368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:27:51.993848    4368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:27:51.995425    4368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	
	
	==> kernel <==
	 14:27:52 up  5:10,  0 user,  load average: 0.33, 0.36, 0.53
	Linux functional-135520 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 06 14:27:46 functional-135520 kubelet[1801]: E1006 14:27:46.516009    1801 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-135520\" not found" node="functional-135520"
	Oct 06 14:27:46 functional-135520 kubelet[1801]: E1006 14:27:46.539594    1801 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 06 14:27:46 functional-135520 kubelet[1801]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 06 14:27:46 functional-135520 kubelet[1801]:  > podSandboxID="f122bf3cdcc12aa8e4b9a0e1655bceae045fdc99afe781ed4e5deffc77adf21d"
	Oct 06 14:27:46 functional-135520 kubelet[1801]: E1006 14:27:46.539677    1801 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 06 14:27:46 functional-135520 kubelet[1801]:         container etcd start failed in pod etcd-functional-135520_kube-system(f24ebbe4b3fc964d32e35d345c0d3653): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 06 14:27:46 functional-135520 kubelet[1801]:  > logger="UnhandledError"
	Oct 06 14:27:46 functional-135520 kubelet[1801]: E1006 14:27:46.539706    1801 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-functional-135520" podUID="f24ebbe4b3fc964d32e35d345c0d3653"
	Oct 06 14:27:47 functional-135520 kubelet[1801]: E1006 14:27:47.515820    1801 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-135520\" not found" node="functional-135520"
	Oct 06 14:27:47 functional-135520 kubelet[1801]: E1006 14:27:47.545460    1801 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 06 14:27:47 functional-135520 kubelet[1801]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 06 14:27:47 functional-135520 kubelet[1801]:  > podSandboxID="c8563dd0b37e233739b3c3a382aa7aa99838d00dddfb4c17bcee8072fc8b2e15"
	Oct 06 14:27:47 functional-135520 kubelet[1801]: E1006 14:27:47.545569    1801 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 06 14:27:47 functional-135520 kubelet[1801]:         container kube-apiserver start failed in pod kube-apiserver-functional-135520_kube-system(64c921c0d544efd1faaa2d85c050bc13): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 06 14:27:47 functional-135520 kubelet[1801]:  > logger="UnhandledError"
	Oct 06 14:27:47 functional-135520 kubelet[1801]: E1006 14:27:47.545614    1801 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-functional-135520" podUID="64c921c0d544efd1faaa2d85c050bc13"
	Oct 06 14:27:48 functional-135520 kubelet[1801]: E1006 14:27:48.515740    1801 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-135520\" not found" node="functional-135520"
	Oct 06 14:27:48 functional-135520 kubelet[1801]: E1006 14:27:48.540814    1801 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 06 14:27:48 functional-135520 kubelet[1801]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 06 14:27:48 functional-135520 kubelet[1801]:  > podSandboxID="a92786c5eb4654629f78c624cdcfef7af25c891888e7f9c4c81b2755c377da1a"
	Oct 06 14:27:48 functional-135520 kubelet[1801]: E1006 14:27:48.540922    1801 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 06 14:27:48 functional-135520 kubelet[1801]:         container kube-scheduler start failed in pod kube-scheduler-functional-135520_kube-system(5115bd1eba9594a3f2b99b5d6a4b9d59): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 06 14:27:48 functional-135520 kubelet[1801]:  > logger="UnhandledError"
	Oct 06 14:27:48 functional-135520 kubelet[1801]: E1006 14:27:48.540950    1801 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-functional-135520" podUID="5115bd1eba9594a3f2b99b5d6a4b9d59"
	Oct 06 14:27:50 functional-135520 kubelet[1801]: E1006 14:27:50.834294    1801 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://192.168.49.2:8441/api/v1/namespaces/default/events/functional-135520.186beca30fea008b\": dial tcp 192.168.49.2:8441: connect: connection refused" event="&Event{ObjectMeta:{functional-135520.186beca30fea008b  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-135520,UID:functional-135520,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node functional-135520 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:functional-135520,},FirstTimestamp:2025-10-06 14:17:44.509128843 +0000 UTC m=+0.464938753,LastTimestamp:2025-10-06 14:17:44.510554344 +0000 UTC m=+0.466364247,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-135520,}"
	

                                                
                                                
-- /stdout --
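
For reference, the six-minute loop in the log above is minikube's node-readiness wait (node_ready.go) issuing GET /api/v1/nodes/functional-135520 roughly every 500ms and getting connection refused each time. The same condition can be probed by hand; a minimal sketch, assuming the functional-135520 kubeconfig context from this run is still present on the host:

	# Print the node's Ready condition status ("True" once the node is Ready):
	kubectl --context functional-135520 get node functional-135520 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
	# Or dial the apiserver endpoint the wait loop uses, bypassing kubectl:
	curl -sk https://192.168.49.2:8441/healthz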
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-135520 -n functional-135520
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-135520 -n functional-135520: exit status 2 (314.71963ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-135520" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/serial/SoftStart (366.56s)
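
The root cause is visible in the CRI-O and kubelet excerpts above: every control-plane container (etcd, kube-apiserver, kube-scheduler) fails at create time with "cannot open sd-bus: No such file or directory", so the apiserver never listens on 8441 and the readiness wait can only time out. That error usually means the runtime tried to talk to systemd over D-Bus (for example, a systemd cgroup manager) and no bus socket was reachable inside the node container. A hedged way to check from the host follows; the socket and config paths are the conventional ones, not taken from this log:

	# Is a systemd/D-Bus socket present inside the kicbase node container?
	docker exec functional-135520 ls -l /run/dbus/system_bus_socket /run/systemd/private
	# Which cgroup manager is CRI-O configured to use?
	docker exec functional-135520 grep -R cgroup_manager /etc/crio/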

                                                
                                    
TestFunctional/serial/KubectlGetPods (2.19s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-135520 get po -A
functional_test.go:711: (dbg) Non-zero exit: kubectl --context functional-135520 get po -A: exit status 1 (54.071663ms)

                                                
                                                
** stderr ** 
	E1006 14:27:52.939650  653334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1006 14:27:52.940030  653334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1006 14:27:52.941610  653334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1006 14:27:52.942483  653334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1006 14:27:52.943964  653334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:713: failed to get kubectl pods: args "kubectl --context functional-135520 get po -A" : exit status 1
functional_test.go:717: expected stderr to be empty but got *"E1006 14:27:52.939650  653334 memcache.go:265] \"Unhandled Error\" err=\"couldn't get current server API group list: Get \\\"https://192.168.49.2:8441/api?timeout=32s\\\": dial tcp 192.168.49.2:8441: connect: connection refused\"\nE1006 14:27:52.940030  653334 memcache.go:265] \"Unhandled Error\" err=\"couldn't get current server API group list: Get \\\"https://192.168.49.2:8441/api?timeout=32s\\\": dial tcp 192.168.49.2:8441: connect: connection refused\"\nE1006 14:27:52.941610  653334 memcache.go:265] \"Unhandled Error\" err=\"couldn't get current server API group list: Get \\\"https://192.168.49.2:8441/api?timeout=32s\\\": dial tcp 192.168.49.2:8441: connect: connection refused\"\nE1006 14:27:52.942483  653334 memcache.go:265] \"Unhandled Error\" err=\"couldn't get current server API group list: Get \\\"https://192.168.49.2:8441/api?timeout=32s\\\": dial tcp 192.168.49.2:8441: connect: connection refused\"\nE1006 14:27:52.943964  653334 memcache.go:265] \"Unhandled Error\" err=\"couldn't get current server API group list: Get \\\"https://192.168.49.2:8441/api?timeout=32s\\\": dial tcp 192.168.49.2:8441: connect: connection refused\"\nThe connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?\n"*: args "kubectl --context functional-135520 get po -A"
functional_test.go:720: expected stdout to include *kube-system* but got *""*. args: "kubectl --context functional-135520 get po -A"
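
All five kubectl attempts fail at the transport layer, before any resource lookup: the TCP connect to 192.168.49.2:8441 is refused, consistent with the apiserver container never having started in the preceding test. An illustrative pre-check that separates "apiserver down" from "wrong kubeconfig" (not part of the test itself):

	curl -sk --max-time 5 https://192.168.49.2:8441/healthz || echo "apiserver unreachable"

A refused connection here points at the server side; a TLS error or a 401/403 response would instead point at client configuration.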
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/serial/KubectlGetPods]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/serial/KubectlGetPods]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-135520
helpers_test.go:243: (dbg) docker inspect functional-135520:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "3dd9a226ea42760de316c3cd10c240a06a297d856d2072b1bb8c9de31097dd20",
	        "Created": "2025-10-06T14:13:32.283355011Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 644403,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-06T14:13:32.318096257Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/3dd9a226ea42760de316c3cd10c240a06a297d856d2072b1bb8c9de31097dd20/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3dd9a226ea42760de316c3cd10c240a06a297d856d2072b1bb8c9de31097dd20/hostname",
	        "HostsPath": "/var/lib/docker/containers/3dd9a226ea42760de316c3cd10c240a06a297d856d2072b1bb8c9de31097dd20/hosts",
	        "LogPath": "/var/lib/docker/containers/3dd9a226ea42760de316c3cd10c240a06a297d856d2072b1bb8c9de31097dd20/3dd9a226ea42760de316c3cd10c240a06a297d856d2072b1bb8c9de31097dd20-json.log",
	        "Name": "/functional-135520",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-135520:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-135520",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "3dd9a226ea42760de316c3cd10c240a06a297d856d2072b1bb8c9de31097dd20",
	                "LowerDir": "/var/lib/docker/overlay2/fc963905026931708302dacddcd89a9d41c6b02cea585cc1ff491aa62dc8d60a-init/diff:/var/lib/docker/overlay2/498c39ad2e273bbda04a4b230222b9767ea2da097b1fe98436168d26143cd080/diff",
	                "MergedDir": "/var/lib/docker/overlay2/fc963905026931708302dacddcd89a9d41c6b02cea585cc1ff491aa62dc8d60a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/fc963905026931708302dacddcd89a9d41c6b02cea585cc1ff491aa62dc8d60a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/fc963905026931708302dacddcd89a9d41c6b02cea585cc1ff491aa62dc8d60a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-135520",
	                "Source": "/var/lib/docker/volumes/functional-135520/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-135520",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-135520",
	                "name.minikube.sigs.k8s.io": "functional-135520",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6368ffca3e5840f94a34614c511d9f0a0a4ca0d05de4fe1f94c8bfdc332f1a62",
	            "SandboxKey": "/var/run/docker/netns/6368ffca3e58",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32878"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32879"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32882"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32880"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32881"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-135520": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ae:d1:94:25:38:1c",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f712be59dd18dac98bed5f234c9f77a39e85277143d6f46285adcd3b0185d552",
	                    "EndpointID": "b816964b653b1b5116e3262dfdc87af272931013ef5b9e2714c9ff7357118a6f",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-135520",
	                        "3dd9a226ea42"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
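
The inspect output confirms the node container itself is healthy: State.Status is "running" and apiserver port 8441 is published to 127.0.0.1:32881, so only the workload inside it is broken. When just those two facts matter, docker's standard --format templating keeps the post-mortem short (a sketch, using the container name from this run):

	# Prints "running 32881" for this container:
	docker inspect -f '{{.State.Status}} {{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-135520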
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-135520 -n functional-135520
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-135520 -n functional-135520: exit status 2 (304.197978ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctional/serial/KubectlGetPods FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/serial/KubectlGetPods]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-135520 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-135520 logs -n 25: (1.004939364s)
helpers_test.go:260: TestFunctional/serial/KubectlGetPods logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-only-040731                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-040731   │ jenkins │ v1.37.0 │ 06 Oct 25 13:56 UTC │ 06 Oct 25 13:56 UTC │
	│ start   │ --download-only -p download-docker-650660 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-650660 │ jenkins │ v1.37.0 │ 06 Oct 25 13:56 UTC │                     │
	│ delete  │ -p download-docker-650660                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-650660 │ jenkins │ v1.37.0 │ 06 Oct 25 13:56 UTC │ 06 Oct 25 13:56 UTC │
	│ start   │ --download-only -p binary-mirror-501421 --alsologtostderr --binary-mirror http://127.0.0.1:36469 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-501421   │ jenkins │ v1.37.0 │ 06 Oct 25 13:56 UTC │                     │
	│ delete  │ -p binary-mirror-501421                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-501421   │ jenkins │ v1.37.0 │ 06 Oct 25 13:56 UTC │ 06 Oct 25 13:56 UTC │
	│ addons  │ enable dashboard -p addons-834039                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-834039          │ jenkins │ v1.37.0 │ 06 Oct 25 13:56 UTC │                     │
	│ addons  │ disable dashboard -p addons-834039                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-834039          │ jenkins │ v1.37.0 │ 06 Oct 25 13:56 UTC │                     │
	│ start   │ -p addons-834039 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-834039          │ jenkins │ v1.37.0 │ 06 Oct 25 13:56 UTC │                     │
	│ delete  │ -p addons-834039                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-834039          │ jenkins │ v1.37.0 │ 06 Oct 25 14:04 UTC │ 06 Oct 25 14:04 UTC │
	│ start   │ -p nospam-500584 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-500584 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                  │ nospam-500584          │ jenkins │ v1.37.0 │ 06 Oct 25 14:04 UTC │                     │
	│ start   │ nospam-500584 --log_dir /tmp/nospam-500584 start --dry-run                                                                                                                                                                                                                                                                                                                                                                                                               │ nospam-500584          │ jenkins │ v1.37.0 │ 06 Oct 25 14:13 UTC │                     │
	│ start   │ nospam-500584 --log_dir /tmp/nospam-500584 start --dry-run                                                                                                                                                                                                                                                                                                                                                                                                               │ nospam-500584          │ jenkins │ v1.37.0 │ 06 Oct 25 14:13 UTC │                     │
	│ start   │ nospam-500584 --log_dir /tmp/nospam-500584 start --dry-run                                                                                                                                                                                                                                                                                                                                                                                                               │ nospam-500584          │ jenkins │ v1.37.0 │ 06 Oct 25 14:13 UTC │                     │
	│ pause   │ nospam-500584 --log_dir /tmp/nospam-500584 pause                                                                                                                                                                                                                                                                                                                                                                                                                         │ nospam-500584          │ jenkins │ v1.37.0 │ 06 Oct 25 14:13 UTC │ 06 Oct 25 14:13 UTC │
	│ pause   │ nospam-500584 --log_dir /tmp/nospam-500584 pause                                                                                                                                                                                                                                                                                                                                                                                                                         │ nospam-500584          │ jenkins │ v1.37.0 │ 06 Oct 25 14:13 UTC │ 06 Oct 25 14:13 UTC │
	│ pause   │ nospam-500584 --log_dir /tmp/nospam-500584 pause                                                                                                                                                                                                                                                                                                                                                                                                                         │ nospam-500584          │ jenkins │ v1.37.0 │ 06 Oct 25 14:13 UTC │ 06 Oct 25 14:13 UTC │
	│ unpause │ nospam-500584 --log_dir /tmp/nospam-500584 unpause                                                                                                                                                                                                                                                                                                                                                                                                                       │ nospam-500584          │ jenkins │ v1.37.0 │ 06 Oct 25 14:13 UTC │ 06 Oct 25 14:13 UTC │
	│ unpause │ nospam-500584 --log_dir /tmp/nospam-500584 unpause                                                                                                                                                                                                                                                                                                                                                                                                                       │ nospam-500584          │ jenkins │ v1.37.0 │ 06 Oct 25 14:13 UTC │ 06 Oct 25 14:13 UTC │
	│ unpause │ nospam-500584 --log_dir /tmp/nospam-500584 unpause                                                                                                                                                                                                                                                                                                                                                                                                                       │ nospam-500584          │ jenkins │ v1.37.0 │ 06 Oct 25 14:13 UTC │ 06 Oct 25 14:13 UTC │
	│ stop    │ nospam-500584 --log_dir /tmp/nospam-500584 stop                                                                                                                                                                                                                                                                                                                                                                                                                          │ nospam-500584          │ jenkins │ v1.37.0 │ 06 Oct 25 14:13 UTC │ 06 Oct 25 14:13 UTC │
	│ stop    │ nospam-500584 --log_dir /tmp/nospam-500584 stop                                                                                                                                                                                                                                                                                                                                                                                                                          │ nospam-500584          │ jenkins │ v1.37.0 │ 06 Oct 25 14:13 UTC │ 06 Oct 25 14:13 UTC │
	│ stop    │ nospam-500584 --log_dir /tmp/nospam-500584 stop                                                                                                                                                                                                                                                                                                                                                                                                                          │ nospam-500584          │ jenkins │ v1.37.0 │ 06 Oct 25 14:13 UTC │ 06 Oct 25 14:13 UTC │
	│ delete  │ -p nospam-500584                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ nospam-500584          │ jenkins │ v1.37.0 │ 06 Oct 25 14:13 UTC │ 06 Oct 25 14:13 UTC │
	│ start   │ -p functional-135520 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                            │ functional-135520      │ jenkins │ v1.37.0 │ 06 Oct 25 14:13 UTC │                     │
	│ start   │ -p functional-135520 --alsologtostderr -v=8                                                                                                                                                                                                                                                                                                                                                                                                                              │ functional-135520      │ jenkins │ v1.37.0 │ 06 Oct 25 14:21 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
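
	The two most recent start invocations above carry no completion timestamp; those are the runs this report is investigating. A minimal reproduction, assuming the same checkout and a built out/minikube-linux-amd64 binary, would be:

	  out/minikube-linux-amd64 start -p functional-135520 --alsologtostderr -v=8
	  # then capture the full log bundle for triage
	  out/minikube-linux-amd64 logs -p functional-135520 --file=/tmp/functional-135520.log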
	
	
	==> Last Start <==
	Log file created at: 2025/10/06 14:21:46
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1006 14:21:46.323016  649678 out.go:360] Setting OutFile to fd 1 ...
	I1006 14:21:46.323271  649678 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 14:21:46.323279  649678 out.go:374] Setting ErrFile to fd 2...
	I1006 14:21:46.323283  649678 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 14:21:46.323475  649678 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-626179/.minikube/bin
	I1006 14:21:46.323908  649678 out.go:368] Setting JSON to false
	I1006 14:21:46.324826  649678 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":18242,"bootTime":1759742264,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1006 14:21:46.324926  649678 start.go:140] virtualization: kvm guest
	I1006 14:21:46.326925  649678 out.go:179] * [functional-135520] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1006 14:21:46.327942  649678 notify.go:220] Checking for updates...
	I1006 14:21:46.327965  649678 out.go:179]   - MINIKUBE_LOCATION=21701
	I1006 14:21:46.329155  649678 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1006 14:21:46.330229  649678 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21701-626179/kubeconfig
	I1006 14:21:46.331298  649678 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21701-626179/.minikube
	I1006 14:21:46.332353  649678 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1006 14:21:46.333341  649678 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1006 14:21:46.334666  649678 config.go:182] Loaded profile config "functional-135520": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 14:21:46.334805  649678 driver.go:421] Setting default libvirt URI to qemu:///system
	I1006 14:21:46.359710  649678 docker.go:123] docker version: linux-28.5.0:Docker Engine - Community
	I1006 14:21:46.359861  649678 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 14:21:46.415678  649678 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-06 14:21:46.405264016 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1006 14:21:46.415787  649678 docker.go:318] overlay module found
	I1006 14:21:46.417155  649678 out.go:179] * Using the docker driver based on existing profile
	I1006 14:21:46.418292  649678 start.go:304] selected driver: docker
	I1006 14:21:46.418308  649678 start.go:924] validating driver "docker" against &{Name:functional-135520 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-135520 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 14:21:46.418380  649678 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1006 14:21:46.418468  649678 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 14:21:46.473903  649678 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-06 14:21:46.464043789 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1006 14:21:46.474648  649678 cni.go:84] Creating CNI manager for ""
	I1006 14:21:46.474719  649678 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1006 14:21:46.474770  649678 start.go:348] cluster config:
	{Name:functional-135520 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-135520 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 14:21:46.476311  649678 out.go:179] * Starting "functional-135520" primary control-plane node in "functional-135520" cluster
	I1006 14:21:46.477235  649678 cache.go:123] Beginning downloading kic base image for docker with crio
	I1006 14:21:46.478074  649678 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1006 14:21:46.479119  649678 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1006 14:21:46.479164  649678 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21701-626179/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1006 14:21:46.479185  649678 cache.go:58] Caching tarball of preloaded images
	I1006 14:21:46.479228  649678 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1006 14:21:46.479294  649678 preload.go:233] Found /home/jenkins/minikube-integration/21701-626179/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1006 14:21:46.479309  649678 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1006 14:21:46.479413  649678 profile.go:143] Saving config to /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/config.json ...
	I1006 14:21:46.499695  649678 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1006 14:21:46.499723  649678 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1006 14:21:46.499744  649678 cache.go:232] Successfully downloaded all kic artifacts
	I1006 14:21:46.499779  649678 start.go:360] acquireMachinesLock for functional-135520: {Name:mk634323c4619e77647ac9d9aaca492e399526ae Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1006 14:21:46.499864  649678 start.go:364] duration metric: took 47.895µs to acquireMachinesLock for "functional-135520"
	I1006 14:21:46.499886  649678 start.go:96] Skipping create...Using existing machine configuration
	I1006 14:21:46.499892  649678 fix.go:54] fixHost starting: 
	I1006 14:21:46.500243  649678 cli_runner.go:164] Run: docker container inspect functional-135520 --format={{.State.Status}}
	I1006 14:21:46.517601  649678 fix.go:112] recreateIfNeeded on functional-135520: state=Running err=<nil>
	W1006 14:21:46.517640  649678 fix.go:138] unexpected machine state, will restart: <nil>
	I1006 14:21:46.519112  649678 out.go:252] * Updating the running docker "functional-135520" container ...
	I1006 14:21:46.519143  649678 machine.go:93] provisionDockerMachine start ...
	I1006 14:21:46.519223  649678 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-135520
	I1006 14:21:46.537175  649678 main.go:141] libmachine: Using SSH client type: native
	I1006 14:21:46.537424  649678 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32878 <nil> <nil>}
	I1006 14:21:46.537438  649678 main.go:141] libmachine: About to run SSH command:
	hostname
	I1006 14:21:46.682374  649678 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-135520
	
	I1006 14:21:46.682420  649678 ubuntu.go:182] provisioning hostname "functional-135520"
	I1006 14:21:46.682484  649678 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-135520
	I1006 14:21:46.700103  649678 main.go:141] libmachine: Using SSH client type: native
	I1006 14:21:46.700382  649678 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32878 <nil> <nil>}
	I1006 14:21:46.700401  649678 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-135520 && echo "functional-135520" | sudo tee /etc/hostname
	I1006 14:21:46.853845  649678 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-135520
	
	I1006 14:21:46.853924  649678 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-135520
	I1006 14:21:46.872015  649678 main.go:141] libmachine: Using SSH client type: native
	I1006 14:21:46.872265  649678 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32878 <nil> <nil>}
	I1006 14:21:46.872284  649678 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-135520' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-135520/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-135520' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1006 14:21:47.017154  649678 main.go:141] libmachine: SSH cmd err, output: <nil>: 
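
	At this point the provisioner has set the container hostname and pinned it in /etc/hosts. A quick spot-check from the host, assuming the container name from this run, is:

	  docker exec functional-135520 hostname
	  # expect a "127.0.1.1 functional-135520" entry
	  docker exec functional-135520 grep functional-135520 /etc/hosts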
	I1006 14:21:47.017184  649678 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21701-626179/.minikube CaCertPath:/home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21701-626179/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21701-626179/.minikube}
	I1006 14:21:47.017239  649678 ubuntu.go:190] setting up certificates
	I1006 14:21:47.017253  649678 provision.go:84] configureAuth start
	I1006 14:21:47.017340  649678 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-135520
	I1006 14:21:47.035104  649678 provision.go:143] copyHostCerts
	I1006 14:21:47.035140  649678 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21701-626179/.minikube/key.pem
	I1006 14:21:47.035175  649678 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-626179/.minikube/key.pem, removing ...
	I1006 14:21:47.035198  649678 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-626179/.minikube/key.pem
	I1006 14:21:47.035336  649678 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21701-626179/.minikube/key.pem (1679 bytes)
	I1006 14:21:47.035448  649678 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21701-626179/.minikube/ca.pem
	I1006 14:21:47.035468  649678 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-626179/.minikube/ca.pem, removing ...
	I1006 14:21:47.035478  649678 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-626179/.minikube/ca.pem
	I1006 14:21:47.035513  649678 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21701-626179/.minikube/ca.pem (1082 bytes)
	I1006 14:21:47.035575  649678 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21701-626179/.minikube/cert.pem
	I1006 14:21:47.035593  649678 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-626179/.minikube/cert.pem, removing ...
	I1006 14:21:47.035599  649678 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-626179/.minikube/cert.pem
	I1006 14:21:47.035623  649678 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21701-626179/.minikube/cert.pem (1123 bytes)
	I1006 14:21:47.035688  649678 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21701-626179/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca-key.pem org=jenkins.functional-135520 san=[127.0.0.1 192.168.49.2 functional-135520 localhost minikube]
	I1006 14:21:47.332166  649678 provision.go:177] copyRemoteCerts
	I1006 14:21:47.332258  649678 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1006 14:21:47.332304  649678 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-135520
	I1006 14:21:47.351185  649678 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32878 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/functional-135520/id_rsa Username:docker}
	I1006 14:21:47.453191  649678 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1006 14:21:47.453264  649678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1006 14:21:47.470840  649678 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1006 14:21:47.470907  649678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1006 14:21:47.487466  649678 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1006 14:21:47.487518  649678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1006 14:21:47.504343  649678 provision.go:87] duration metric: took 487.07429ms to configureAuth
	I1006 14:21:47.504374  649678 ubuntu.go:206] setting minikube options for container-runtime
	I1006 14:21:47.504541  649678 config.go:182] Loaded profile config "functional-135520": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 14:21:47.504639  649678 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-135520
	I1006 14:21:47.523029  649678 main.go:141] libmachine: Using SSH client type: native
	I1006 14:21:47.523280  649678 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32878 <nil> <nil>}
	I1006 14:21:47.523307  649678 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1006 14:21:47.788227  649678 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1006 14:21:47.788259  649678 machine.go:96] duration metric: took 1.269106143s to provisionDockerMachine
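
	The /etc/sysconfig/crio.minikube drop-in written above is how --insecure-registry 10.96.0.0/12 reaches cri-o. To confirm it landed and that the daemon came back after the restart, something like the following (same container name as above) works:

	  docker exec functional-135520 cat /etc/sysconfig/crio.minikube
	  docker exec functional-135520 systemctl is-active crio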
	I1006 14:21:47.788275  649678 start.go:293] postStartSetup for "functional-135520" (driver="docker")
	I1006 14:21:47.788290  649678 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1006 14:21:47.788372  649678 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1006 14:21:47.788428  649678 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-135520
	I1006 14:21:47.805850  649678 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32878 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/functional-135520/id_rsa Username:docker}
	I1006 14:21:47.908894  649678 ssh_runner.go:195] Run: cat /etc/os-release
	I1006 14:21:47.912773  649678 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1006 14:21:47.912795  649678 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1006 14:21:47.912801  649678 command_runner.go:130] > VERSION_ID="12"
	I1006 14:21:47.912807  649678 command_runner.go:130] > VERSION="12 (bookworm)"
	I1006 14:21:47.912813  649678 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1006 14:21:47.912819  649678 command_runner.go:130] > ID=debian
	I1006 14:21:47.912827  649678 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1006 14:21:47.912834  649678 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1006 14:21:47.912843  649678 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1006 14:21:47.912900  649678 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1006 14:21:47.912919  649678 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1006 14:21:47.912929  649678 filesync.go:126] Scanning /home/jenkins/minikube-integration/21701-626179/.minikube/addons for local assets ...
	I1006 14:21:47.912988  649678 filesync.go:126] Scanning /home/jenkins/minikube-integration/21701-626179/.minikube/files for local assets ...
	I1006 14:21:47.913065  649678 filesync.go:149] local asset: /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem -> 6297192.pem in /etc/ssl/certs
	I1006 14:21:47.913078  649678 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem -> /etc/ssl/certs/6297192.pem
	I1006 14:21:47.913143  649678 filesync.go:149] local asset: /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/test/nested/copy/629719/hosts -> hosts in /etc/test/nested/copy/629719
	I1006 14:21:47.913151  649678 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/test/nested/copy/629719/hosts -> /etc/test/nested/copy/629719/hosts
	I1006 14:21:47.913182  649678 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/629719
	I1006 14:21:47.920839  649678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem --> /etc/ssl/certs/6297192.pem (1708 bytes)
	I1006 14:21:47.937786  649678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/test/nested/copy/629719/hosts --> /etc/test/nested/copy/629719/hosts (40 bytes)
	I1006 14:21:47.954760  649678 start.go:296] duration metric: took 166.455369ms for postStartSetup
	I1006 14:21:47.954834  649678 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1006 14:21:47.954870  649678 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-135520
	I1006 14:21:47.972368  649678 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32878 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/functional-135520/id_rsa Username:docker}
	I1006 14:21:48.072535  649678 command_runner.go:130] > 38%
	I1006 14:21:48.072624  649678 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1006 14:21:48.077267  649678 command_runner.go:130] > 182G
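
	The two df probes read used-space percent and free gigabytes on /var inside the machine; the same numbers can be pulled in one shot from the host, e.g.:

	  # column 5 is the used %, column 4 the space available
	  docker exec functional-135520 df -h /var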
	I1006 14:21:48.077574  649678 fix.go:56] duration metric: took 1.577678011s for fixHost
	I1006 14:21:48.077595  649678 start.go:83] releasing machines lock for "functional-135520", held for 1.577717734s
	I1006 14:21:48.077675  649678 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-135520
	I1006 14:21:48.095670  649678 ssh_runner.go:195] Run: cat /version.json
	I1006 14:21:48.095722  649678 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-135520
	I1006 14:21:48.095754  649678 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1006 14:21:48.095827  649678 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-135520
	I1006 14:21:48.113591  649678 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32878 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/functional-135520/id_rsa Username:docker}
	I1006 14:21:48.115313  649678 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32878 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/functional-135520/id_rsa Username:docker}
	I1006 14:21:48.268773  649678 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1006 14:21:48.268839  649678 command_runner.go:130] > {"iso_version": "v1.37.0-1758198818-20370", "kicbase_version": "v0.0.48-1759382731-21643", "minikube_version": "v1.37.0", "commit": "b0c70dd4d342e6443a02916e52d246d8cdb181c4"}
	I1006 14:21:48.268953  649678 ssh_runner.go:195] Run: systemctl --version
	I1006 14:21:48.275683  649678 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1006 14:21:48.275717  649678 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1006 14:21:48.275778  649678 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1006 14:21:48.311695  649678 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1006 14:21:48.316662  649678 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1006 14:21:48.316719  649678 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1006 14:21:48.316778  649678 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1006 14:21:48.324682  649678 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1006 14:21:48.324705  649678 start.go:495] detecting cgroup driver to use...
	I1006 14:21:48.324740  649678 detect.go:190] detected "systemd" cgroup driver on host os
	I1006 14:21:48.324780  649678 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1006 14:21:48.339343  649678 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1006 14:21:48.350971  649678 docker.go:218] disabling cri-docker service (if available) ...
	I1006 14:21:48.351020  649678 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1006 14:21:48.364377  649678 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1006 14:21:48.375810  649678 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1006 14:21:48.466998  649678 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1006 14:21:48.555437  649678 docker.go:234] disabling docker service ...
	I1006 14:21:48.555507  649678 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1006 14:21:48.569642  649678 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1006 14:21:48.581371  649678 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1006 14:21:48.660341  649678 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1006 14:21:48.745051  649678 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1006 14:21:48.757689  649678 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1006 14:21:48.770829  649678 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1006 14:21:48.771733  649678 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1006 14:21:48.771806  649678 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:21:48.781084  649678 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1006 14:21:48.781164  649678 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:21:48.790125  649678 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:21:48.798751  649678 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:21:48.807637  649678 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1006 14:21:48.815986  649678 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:21:48.824650  649678 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:21:48.832873  649678 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:21:48.841368  649678 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1006 14:21:48.847999  649678 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1006 14:21:48.848646  649678 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1006 14:21:48.855735  649678 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 14:21:48.941247  649678 ssh_runner.go:195] Run: sudo systemctl restart crio
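
	Taken together, the cri-o reconfiguration in this block reduces to a few edits to /etc/crio/crio.conf.d/02-crio.conf followed by a restart; a consolidated sketch of the same commands run over SSH above:

	  # pin the pause image and the cgroup driver kubeadm expects
	  sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf
	  sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf
	  # let pods bind privileged ports without extra capabilities
	  sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf
	  sudo systemctl daemon-reload && sudo systemctl restart crio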
	I1006 14:21:49.054732  649678 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1006 14:21:49.054813  649678 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1006 14:21:49.059042  649678 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1006 14:21:49.059070  649678 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1006 14:21:49.059079  649678 command_runner.go:130] > Device: 0,59	Inode: 3845        Links: 1
	I1006 14:21:49.059086  649678 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1006 14:21:49.059091  649678 command_runner.go:130] > Access: 2025-10-06 14:21:49.037104102 +0000
	I1006 14:21:49.059104  649678 command_runner.go:130] > Modify: 2025-10-06 14:21:49.037104102 +0000
	I1006 14:21:49.059109  649678 command_runner.go:130] > Change: 2025-10-06 14:21:49.037104102 +0000
	I1006 14:21:49.059113  649678 command_runner.go:130] >  Birth: 2025-10-06 14:21:49.037104102 +0000
	I1006 14:21:49.059133  649678 start.go:563] Will wait 60s for crictl version
	I1006 14:21:49.059181  649678 ssh_runner.go:195] Run: which crictl
	I1006 14:21:49.062689  649678 command_runner.go:130] > /usr/local/bin/crictl
	I1006 14:21:49.062764  649678 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1006 14:21:49.086605  649678 command_runner.go:130] > Version:  0.1.0
	I1006 14:21:49.086623  649678 command_runner.go:130] > RuntimeName:  cri-o
	I1006 14:21:49.086627  649678 command_runner.go:130] > RuntimeVersion:  1.34.1
	I1006 14:21:49.086632  649678 command_runner.go:130] > RuntimeApiVersion:  v1
	I1006 14:21:49.088423  649678 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
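
	The version handshake goes through the endpoint written to /etc/crictl.yaml earlier; the equivalent explicit query is:

	  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version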
	I1006 14:21:49.088499  649678 ssh_runner.go:195] Run: crio --version
	I1006 14:21:49.118625  649678 command_runner.go:130] > crio version 1.34.1
	I1006 14:21:49.118652  649678 command_runner.go:130] >    GitCommit:      8e14bff4153ba033f12ed3ffa3cadaca5425b313
	I1006 14:21:49.118659  649678 command_runner.go:130] >    GitCommitDate:  2025-10-01T13:04:13Z
	I1006 14:21:49.118666  649678 command_runner.go:130] >    GitTreeState:   dirty
	I1006 14:21:49.118672  649678 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1006 14:21:49.118678  649678 command_runner.go:130] >    GoVersion:      go1.24.6
	I1006 14:21:49.118683  649678 command_runner.go:130] >    Compiler:       gc
	I1006 14:21:49.118692  649678 command_runner.go:130] >    Platform:       linux/amd64
	I1006 14:21:49.118700  649678 command_runner.go:130] >    Linkmode:       static
	I1006 14:21:49.118708  649678 command_runner.go:130] >    BuildTags:
	I1006 14:21:49.118718  649678 command_runner.go:130] >      static
	I1006 14:21:49.118724  649678 command_runner.go:130] >      netgo
	I1006 14:21:49.118729  649678 command_runner.go:130] >      osusergo
	I1006 14:21:49.118739  649678 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1006 14:21:49.118745  649678 command_runner.go:130] >      seccomp
	I1006 14:21:49.118749  649678 command_runner.go:130] >      apparmor
	I1006 14:21:49.118753  649678 command_runner.go:130] >      selinux
	I1006 14:21:49.118757  649678 command_runner.go:130] >    LDFlags:          unknown
	I1006 14:21:49.118781  649678 command_runner.go:130] >    SeccompEnabled:   true
	I1006 14:21:49.118789  649678 command_runner.go:130] >    AppArmorEnabled:  false
	I1006 14:21:49.118869  649678 ssh_runner.go:195] Run: crio --version
	I1006 14:21:49.147173  649678 command_runner.go:130] > crio version 1.34.1
	I1006 14:21:49.147230  649678 command_runner.go:130] >    GitCommit:      8e14bff4153ba033f12ed3ffa3cadaca5425b313
	I1006 14:21:49.147241  649678 command_runner.go:130] >    GitCommitDate:  2025-10-01T13:04:13Z
	I1006 14:21:49.147249  649678 command_runner.go:130] >    GitTreeState:   dirty
	I1006 14:21:49.147257  649678 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1006 14:21:49.147263  649678 command_runner.go:130] >    GoVersion:      go1.24.6
	I1006 14:21:49.147267  649678 command_runner.go:130] >    Compiler:       gc
	I1006 14:21:49.147283  649678 command_runner.go:130] >    Platform:       linux/amd64
	I1006 14:21:49.147292  649678 command_runner.go:130] >    Linkmode:       static
	I1006 14:21:49.147296  649678 command_runner.go:130] >    BuildTags:
	I1006 14:21:49.147299  649678 command_runner.go:130] >      static
	I1006 14:21:49.147303  649678 command_runner.go:130] >      netgo
	I1006 14:21:49.147309  649678 command_runner.go:130] >      osusergo
	I1006 14:21:49.147313  649678 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1006 14:21:49.147320  649678 command_runner.go:130] >      seccomp
	I1006 14:21:49.147324  649678 command_runner.go:130] >      apparmor
	I1006 14:21:49.147330  649678 command_runner.go:130] >      selinux
	I1006 14:21:49.147334  649678 command_runner.go:130] >    LDFlags:          unknown
	I1006 14:21:49.147340  649678 command_runner.go:130] >    SeccompEnabled:   true
	I1006 14:21:49.147443  649678 command_runner.go:130] >    AppArmorEnabled:  false
	I1006 14:21:49.149760  649678 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1006 14:21:49.150923  649678 cli_runner.go:164] Run: docker network inspect functional-135520 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1006 14:21:49.168305  649678 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1006 14:21:49.172524  649678 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1006 14:21:49.172624  649678 kubeadm.go:883] updating cluster {Name:functional-135520 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-135520 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1006 14:21:49.172735  649678 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1006 14:21:49.172777  649678 ssh_runner.go:195] Run: sudo crictl images --output json
	I1006 14:21:49.203555  649678 command_runner.go:130] > {
	I1006 14:21:49.203573  649678 command_runner.go:130] >   "images":  [
	I1006 14:21:49.203577  649678 command_runner.go:130] >     {
	I1006 14:21:49.203585  649678 command_runner.go:130] >       "id":  "409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c",
	I1006 14:21:49.203589  649678 command_runner.go:130] >       "repoTags":  [
	I1006 14:21:49.203596  649678 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1006 14:21:49.203599  649678 command_runner.go:130] >       ],
	I1006 14:21:49.203603  649678 command_runner.go:130] >       "repoDigests":  [
	I1006 14:21:49.203613  649678 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1006 14:21:49.203619  649678 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"
	I1006 14:21:49.203623  649678 command_runner.go:130] >       ],
	I1006 14:21:49.203628  649678 command_runner.go:130] >       "size":  "109379124",
	I1006 14:21:49.203634  649678 command_runner.go:130] >       "username":  "",
	I1006 14:21:49.203641  649678 command_runner.go:130] >       "pinned":  false
	I1006 14:21:49.203647  649678 command_runner.go:130] >     },
	I1006 14:21:49.203650  649678 command_runner.go:130] >     {
	I1006 14:21:49.203656  649678 command_runner.go:130] >       "id":  "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1006 14:21:49.203660  649678 command_runner.go:130] >       "repoTags":  [
	I1006 14:21:49.203665  649678 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1006 14:21:49.203671  649678 command_runner.go:130] >       ],
	I1006 14:21:49.203676  649678 command_runner.go:130] >       "repoDigests":  [
	I1006 14:21:49.203684  649678 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1006 14:21:49.203694  649678 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1006 14:21:49.203697  649678 command_runner.go:130] >       ],
	I1006 14:21:49.203701  649678 command_runner.go:130] >       "size":  "31470524",
	I1006 14:21:49.203705  649678 command_runner.go:130] >       "username":  "",
	I1006 14:21:49.203716  649678 command_runner.go:130] >       "pinned":  false
	I1006 14:21:49.203722  649678 command_runner.go:130] >     },
	I1006 14:21:49.203725  649678 command_runner.go:130] >     {
	I1006 14:21:49.203731  649678 command_runner.go:130] >       "id":  "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969",
	I1006 14:21:49.203737  649678 command_runner.go:130] >       "repoTags":  [
	I1006 14:21:49.203742  649678 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.12.1"
	I1006 14:21:49.203748  649678 command_runner.go:130] >       ],
	I1006 14:21:49.203752  649678 command_runner.go:130] >       "repoDigests":  [
	I1006 14:21:49.203759  649678 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998",
	I1006 14:21:49.203768  649678 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"
	I1006 14:21:49.203771  649678 command_runner.go:130] >       ],
	I1006 14:21:49.203775  649678 command_runner.go:130] >       "size":  "76103547",
	I1006 14:21:49.203779  649678 command_runner.go:130] >       "username":  "nonroot",
	I1006 14:21:49.203783  649678 command_runner.go:130] >       "pinned":  false
	I1006 14:21:49.203785  649678 command_runner.go:130] >     },
	I1006 14:21:49.203789  649678 command_runner.go:130] >     {
	I1006 14:21:49.203794  649678 command_runner.go:130] >       "id":  "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115",
	I1006 14:21:49.203799  649678 command_runner.go:130] >       "repoTags":  [
	I1006 14:21:49.203804  649678 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.4-0"
	I1006 14:21:49.203807  649678 command_runner.go:130] >       ],
	I1006 14:21:49.203811  649678 command_runner.go:130] >       "repoDigests":  [
	I1006 14:21:49.203817  649678 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f",
	I1006 14:21:49.203826  649678 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"
	I1006 14:21:49.203829  649678 command_runner.go:130] >       ],
	I1006 14:21:49.203836  649678 command_runner.go:130] >       "size":  "195976448",
	I1006 14:21:49.203840  649678 command_runner.go:130] >       "uid":  {
	I1006 14:21:49.203844  649678 command_runner.go:130] >         "value":  "0"
	I1006 14:21:49.203847  649678 command_runner.go:130] >       },
	I1006 14:21:49.203855  649678 command_runner.go:130] >       "username":  "",
	I1006 14:21:49.203861  649678 command_runner.go:130] >       "pinned":  false
	I1006 14:21:49.203864  649678 command_runner.go:130] >     },
	I1006 14:21:49.203867  649678 command_runner.go:130] >     {
	I1006 14:21:49.203873  649678 command_runner.go:130] >       "id":  "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97",
	I1006 14:21:49.203879  649678 command_runner.go:130] >       "repoTags":  [
	I1006 14:21:49.203884  649678 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.34.1"
	I1006 14:21:49.203887  649678 command_runner.go:130] >       ],
	I1006 14:21:49.203891  649678 command_runner.go:130] >       "repoDigests":  [
	I1006 14:21:49.203901  649678 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964",
	I1006 14:21:49.203907  649678 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"
	I1006 14:21:49.203913  649678 command_runner.go:130] >       ],
	I1006 14:21:49.203916  649678 command_runner.go:130] >       "size":  "89046001",
	I1006 14:21:49.203920  649678 command_runner.go:130] >       "uid":  {
	I1006 14:21:49.203925  649678 command_runner.go:130] >         "value":  "0"
	I1006 14:21:49.203928  649678 command_runner.go:130] >       },
	I1006 14:21:49.203931  649678 command_runner.go:130] >       "username":  "",
	I1006 14:21:49.203935  649678 command_runner.go:130] >       "pinned":  false
	I1006 14:21:49.203938  649678 command_runner.go:130] >     },
	I1006 14:21:49.203941  649678 command_runner.go:130] >     {
	I1006 14:21:49.203947  649678 command_runner.go:130] >       "id":  "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f",
	I1006 14:21:49.203953  649678 command_runner.go:130] >       "repoTags":  [
	I1006 14:21:49.203958  649678 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.34.1"
	I1006 14:21:49.203961  649678 command_runner.go:130] >       ],
	I1006 14:21:49.203965  649678 command_runner.go:130] >       "repoDigests":  [
	I1006 14:21:49.203972  649678 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89",
	I1006 14:21:49.203981  649678 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"
	I1006 14:21:49.203984  649678 command_runner.go:130] >       ],
	I1006 14:21:49.203988  649678 command_runner.go:130] >       "size":  "76004181",
	I1006 14:21:49.203992  649678 command_runner.go:130] >       "uid":  {
	I1006 14:21:49.203998  649678 command_runner.go:130] >         "value":  "0"
	I1006 14:21:49.204001  649678 command_runner.go:130] >       },
	I1006 14:21:49.204005  649678 command_runner.go:130] >       "username":  "",
	I1006 14:21:49.204011  649678 command_runner.go:130] >       "pinned":  false
	I1006 14:21:49.204014  649678 command_runner.go:130] >     },
	I1006 14:21:49.204019  649678 command_runner.go:130] >     {
	I1006 14:21:49.204024  649678 command_runner.go:130] >       "id":  "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7",
	I1006 14:21:49.204028  649678 command_runner.go:130] >       "repoTags":  [
	I1006 14:21:49.204033  649678 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.34.1"
	I1006 14:21:49.204036  649678 command_runner.go:130] >       ],
	I1006 14:21:49.204042  649678 command_runner.go:130] >       "repoDigests":  [
	I1006 14:21:49.204055  649678 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a",
	I1006 14:21:49.204067  649678 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"
	I1006 14:21:49.204073  649678 command_runner.go:130] >       ],
	I1006 14:21:49.204078  649678 command_runner.go:130] >       "size":  "73138073",
	I1006 14:21:49.204081  649678 command_runner.go:130] >       "username":  "",
	I1006 14:21:49.204085  649678 command_runner.go:130] >       "pinned":  false
	I1006 14:21:49.204089  649678 command_runner.go:130] >     },
	I1006 14:21:49.204092  649678 command_runner.go:130] >     {
	I1006 14:21:49.204097  649678 command_runner.go:130] >       "id":  "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813",
	I1006 14:21:49.204104  649678 command_runner.go:130] >       "repoTags":  [
	I1006 14:21:49.204108  649678 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.34.1"
	I1006 14:21:49.204112  649678 command_runner.go:130] >       ],
	I1006 14:21:49.204116  649678 command_runner.go:130] >       "repoDigests":  [
	I1006 14:21:49.204123  649678 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31",
	I1006 14:21:49.204153  649678 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"
	I1006 14:21:49.204160  649678 command_runner.go:130] >       ],
	I1006 14:21:49.204164  649678 command_runner.go:130] >       "size":  "53844823",
	I1006 14:21:49.204167  649678 command_runner.go:130] >       "uid":  {
	I1006 14:21:49.204170  649678 command_runner.go:130] >         "value":  "0"
	I1006 14:21:49.204174  649678 command_runner.go:130] >       },
	I1006 14:21:49.204178  649678 command_runner.go:130] >       "username":  "",
	I1006 14:21:49.204183  649678 command_runner.go:130] >       "pinned":  false
	I1006 14:21:49.204188  649678 command_runner.go:130] >     },
	I1006 14:21:49.204191  649678 command_runner.go:130] >     {
	I1006 14:21:49.204197  649678 command_runner.go:130] >       "id":  "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f",
	I1006 14:21:49.204222  649678 command_runner.go:130] >       "repoTags":  [
	I1006 14:21:49.204230  649678 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1006 14:21:49.204237  649678 command_runner.go:130] >       ],
	I1006 14:21:49.204243  649678 command_runner.go:130] >       "repoDigests":  [
	I1006 14:21:49.204253  649678 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1006 14:21:49.204260  649678 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"
	I1006 14:21:49.204266  649678 command_runner.go:130] >       ],
	I1006 14:21:49.204269  649678 command_runner.go:130] >       "size":  "742092",
	I1006 14:21:49.204273  649678 command_runner.go:130] >       "uid":  {
	I1006 14:21:49.204277  649678 command_runner.go:130] >         "value":  "65535"
	I1006 14:21:49.204280  649678 command_runner.go:130] >       },
	I1006 14:21:49.204284  649678 command_runner.go:130] >       "username":  "",
	I1006 14:21:49.204288  649678 command_runner.go:130] >       "pinned":  true
	I1006 14:21:49.204291  649678 command_runner.go:130] >     }
	I1006 14:21:49.204294  649678 command_runner.go:130] >   ]
	I1006 14:21:49.204299  649678 command_runner.go:130] > }
	I1006 14:21:49.205550  649678 crio.go:514] all images are preloaded for cri-o runtime.
	I1006 14:21:49.205570  649678 crio.go:433] Images already preloaded, skipping extraction
	I1006 14:21:49.205618  649678 ssh_runner.go:195] Run: sudo crictl images --output json
	I1006 14:21:49.229611  649678 command_runner.go:130] > {
	I1006 14:21:49.229630  649678 command_runner.go:130] >   "images":  [
	I1006 14:21:49.229637  649678 command_runner.go:130] >     {
	I1006 14:21:49.229647  649678 command_runner.go:130] >       "id":  "409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c",
	I1006 14:21:49.229656  649678 command_runner.go:130] >       "repoTags":  [
	I1006 14:21:49.229664  649678 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1006 14:21:49.229669  649678 command_runner.go:130] >       ],
	I1006 14:21:49.229675  649678 command_runner.go:130] >       "repoDigests":  [
	I1006 14:21:49.229690  649678 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1006 14:21:49.229706  649678 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"
	I1006 14:21:49.229712  649678 command_runner.go:130] >       ],
	I1006 14:21:49.229738  649678 command_runner.go:130] >       "size":  "109379124",
	I1006 14:21:49.229748  649678 command_runner.go:130] >       "username":  "",
	I1006 14:21:49.229755  649678 command_runner.go:130] >       "pinned":  false
	I1006 14:21:49.229761  649678 command_runner.go:130] >     },
	I1006 14:21:49.229770  649678 command_runner.go:130] >     {
	I1006 14:21:49.229780  649678 command_runner.go:130] >       "id":  "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1006 14:21:49.229789  649678 command_runner.go:130] >       "repoTags":  [
	I1006 14:21:49.229799  649678 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1006 14:21:49.229807  649678 command_runner.go:130] >       ],
	I1006 14:21:49.229814  649678 command_runner.go:130] >       "repoDigests":  [
	I1006 14:21:49.229830  649678 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1006 14:21:49.229846  649678 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1006 14:21:49.229854  649678 command_runner.go:130] >       ],
	I1006 14:21:49.229863  649678 command_runner.go:130] >       "size":  "31470524",
	I1006 14:21:49.229872  649678 command_runner.go:130] >       "username":  "",
	I1006 14:21:49.229894  649678 command_runner.go:130] >       "pinned":  false
	I1006 14:21:49.229902  649678 command_runner.go:130] >     },
	I1006 14:21:49.229907  649678 command_runner.go:130] >     {
	I1006 14:21:49.229918  649678 command_runner.go:130] >       "id":  "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969",
	I1006 14:21:49.229927  649678 command_runner.go:130] >       "repoTags":  [
	I1006 14:21:49.229936  649678 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.12.1"
	I1006 14:21:49.229943  649678 command_runner.go:130] >       ],
	I1006 14:21:49.229951  649678 command_runner.go:130] >       "repoDigests":  [
	I1006 14:21:49.229965  649678 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998",
	I1006 14:21:49.229980  649678 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"
	I1006 14:21:49.229999  649678 command_runner.go:130] >       ],
	I1006 14:21:49.230007  649678 command_runner.go:130] >       "size":  "76103547",
	I1006 14:21:49.230016  649678 command_runner.go:130] >       "username":  "nonroot",
	I1006 14:21:49.230023  649678 command_runner.go:130] >       "pinned":  false
	I1006 14:21:49.230031  649678 command_runner.go:130] >     },
	I1006 14:21:49.230036  649678 command_runner.go:130] >     {
	I1006 14:21:49.230050  649678 command_runner.go:130] >       "id":  "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115",
	I1006 14:21:49.230059  649678 command_runner.go:130] >       "repoTags":  [
	I1006 14:21:49.230068  649678 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.4-0"
	I1006 14:21:49.230076  649678 command_runner.go:130] >       ],
	I1006 14:21:49.230083  649678 command_runner.go:130] >       "repoDigests":  [
	I1006 14:21:49.230097  649678 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f",
	I1006 14:21:49.230112  649678 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"
	I1006 14:21:49.230119  649678 command_runner.go:130] >       ],
	I1006 14:21:49.230127  649678 command_runner.go:130] >       "size":  "195976448",
	I1006 14:21:49.230135  649678 command_runner.go:130] >       "uid":  {
	I1006 14:21:49.230143  649678 command_runner.go:130] >         "value":  "0"
	I1006 14:21:49.230152  649678 command_runner.go:130] >       },
	I1006 14:21:49.230165  649678 command_runner.go:130] >       "username":  "",
	I1006 14:21:49.230175  649678 command_runner.go:130] >       "pinned":  false
	I1006 14:21:49.230181  649678 command_runner.go:130] >     },
	I1006 14:21:49.230189  649678 command_runner.go:130] >     {
	I1006 14:21:49.230220  649678 command_runner.go:130] >       "id":  "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97",
	I1006 14:21:49.230239  649678 command_runner.go:130] >       "repoTags":  [
	I1006 14:21:49.230249  649678 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.34.1"
	I1006 14:21:49.230257  649678 command_runner.go:130] >       ],
	I1006 14:21:49.230264  649678 command_runner.go:130] >       "repoDigests":  [
	I1006 14:21:49.230279  649678 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964",
	I1006 14:21:49.230306  649678 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"
	I1006 14:21:49.230314  649678 command_runner.go:130] >       ],
	I1006 14:21:49.230321  649678 command_runner.go:130] >       "size":  "89046001",
	I1006 14:21:49.230329  649678 command_runner.go:130] >       "uid":  {
	I1006 14:21:49.230336  649678 command_runner.go:130] >         "value":  "0"
	I1006 14:21:49.230345  649678 command_runner.go:130] >       },
	I1006 14:21:49.230352  649678 command_runner.go:130] >       "username":  "",
	I1006 14:21:49.230361  649678 command_runner.go:130] >       "pinned":  false
	I1006 14:21:49.230367  649678 command_runner.go:130] >     },
	I1006 14:21:49.230375  649678 command_runner.go:130] >     {
	I1006 14:21:49.230386  649678 command_runner.go:130] >       "id":  "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f",
	I1006 14:21:49.230395  649678 command_runner.go:130] >       "repoTags":  [
	I1006 14:21:49.230406  649678 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.34.1"
	I1006 14:21:49.230414  649678 command_runner.go:130] >       ],
	I1006 14:21:49.230421  649678 command_runner.go:130] >       "repoDigests":  [
	I1006 14:21:49.230436  649678 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89",
	I1006 14:21:49.230451  649678 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"
	I1006 14:21:49.230460  649678 command_runner.go:130] >       ],
	I1006 14:21:49.230467  649678 command_runner.go:130] >       "size":  "76004181",
	I1006 14:21:49.230484  649678 command_runner.go:130] >       "uid":  {
	I1006 14:21:49.230493  649678 command_runner.go:130] >         "value":  "0"
	I1006 14:21:49.230500  649678 command_runner.go:130] >       },
	I1006 14:21:49.230507  649678 command_runner.go:130] >       "username":  "",
	I1006 14:21:49.230516  649678 command_runner.go:130] >       "pinned":  false
	I1006 14:21:49.230523  649678 command_runner.go:130] >     },
	I1006 14:21:49.230529  649678 command_runner.go:130] >     {
	I1006 14:21:49.230542  649678 command_runner.go:130] >       "id":  "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7",
	I1006 14:21:49.230549  649678 command_runner.go:130] >       "repoTags":  [
	I1006 14:21:49.230568  649678 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.34.1"
	I1006 14:21:49.230576  649678 command_runner.go:130] >       ],
	I1006 14:21:49.230583  649678 command_runner.go:130] >       "repoDigests":  [
	I1006 14:21:49.230599  649678 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a",
	I1006 14:21:49.230614  649678 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"
	I1006 14:21:49.230621  649678 command_runner.go:130] >       ],
	I1006 14:21:49.230628  649678 command_runner.go:130] >       "size":  "73138073",
	I1006 14:21:49.230637  649678 command_runner.go:130] >       "username":  "",
	I1006 14:21:49.230645  649678 command_runner.go:130] >       "pinned":  false
	I1006 14:21:49.230653  649678 command_runner.go:130] >     },
	I1006 14:21:49.230658  649678 command_runner.go:130] >     {
	I1006 14:21:49.230665  649678 command_runner.go:130] >       "id":  "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813",
	I1006 14:21:49.230670  649678 command_runner.go:130] >       "repoTags":  [
	I1006 14:21:49.230679  649678 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.34.1"
	I1006 14:21:49.230687  649678 command_runner.go:130] >       ],
	I1006 14:21:49.230693  649678 command_runner.go:130] >       "repoDigests":  [
	I1006 14:21:49.230706  649678 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31",
	I1006 14:21:49.230734  649678 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"
	I1006 14:21:49.230745  649678 command_runner.go:130] >       ],
	I1006 14:21:49.230751  649678 command_runner.go:130] >       "size":  "53844823",
	I1006 14:21:49.230758  649678 command_runner.go:130] >       "uid":  {
	I1006 14:21:49.230767  649678 command_runner.go:130] >         "value":  "0"
	I1006 14:21:49.230773  649678 command_runner.go:130] >       },
	I1006 14:21:49.230783  649678 command_runner.go:130] >       "username":  "",
	I1006 14:21:49.230791  649678 command_runner.go:130] >       "pinned":  false
	I1006 14:21:49.230799  649678 command_runner.go:130] >     },
	I1006 14:21:49.230805  649678 command_runner.go:130] >     {
	I1006 14:21:49.230819  649678 command_runner.go:130] >       "id":  "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f",
	I1006 14:21:49.230828  649678 command_runner.go:130] >       "repoTags":  [
	I1006 14:21:49.230837  649678 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1006 14:21:49.230845  649678 command_runner.go:130] >       ],
	I1006 14:21:49.230852  649678 command_runner.go:130] >       "repoDigests":  [
	I1006 14:21:49.230865  649678 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1006 14:21:49.230878  649678 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"
	I1006 14:21:49.230887  649678 command_runner.go:130] >       ],
	I1006 14:21:49.230894  649678 command_runner.go:130] >       "size":  "742092",
	I1006 14:21:49.230902  649678 command_runner.go:130] >       "uid":  {
	I1006 14:21:49.230909  649678 command_runner.go:130] >         "value":  "65535"
	I1006 14:21:49.230918  649678 command_runner.go:130] >       },
	I1006 14:21:49.230924  649678 command_runner.go:130] >       "username":  "",
	I1006 14:21:49.230934  649678 command_runner.go:130] >       "pinned":  true
	I1006 14:21:49.230940  649678 command_runner.go:130] >     }
	I1006 14:21:49.230948  649678 command_runner.go:130] >   ]
	I1006 14:21:49.230953  649678 command_runner.go:130] > }
	I1006 14:21:49.231845  649678 crio.go:514] all images are preloaded for cri-o runtime.
	I1006 14:21:49.231866  649678 cache_images.go:85] Images are preloaded, skipping loading
	I1006 14:21:49.231873  649678 kubeadm.go:934] updating node { 192.168.49.2 8441 v1.34.1 crio true true} ...
	I1006 14:21:49.232021  649678 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-135520 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:functional-135520 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1006 14:21:49.232106  649678 ssh_runner.go:195] Run: crio config
	I1006 14:21:49.273258  649678 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1006 14:21:49.273298  649678 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1006 14:21:49.273306  649678 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1006 14:21:49.273309  649678 command_runner.go:130] > #
	I1006 14:21:49.273321  649678 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1006 14:21:49.273332  649678 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1006 14:21:49.273343  649678 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1006 14:21:49.273357  649678 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1006 14:21:49.273367  649678 command_runner.go:130] > # reload'.
	I1006 14:21:49.273377  649678 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1006 14:21:49.273389  649678 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1006 14:21:49.273403  649678 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1006 14:21:49.273413  649678 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1006 14:21:49.273423  649678 command_runner.go:130] > [crio]
	I1006 14:21:49.273433  649678 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1006 14:21:49.273446  649678 command_runner.go:130] > # containers images, in this directory.
	I1006 14:21:49.273471  649678 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1006 14:21:49.273486  649678 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1006 14:21:49.273494  649678 command_runner.go:130] > # runroot = "/tmp/storage-run-1000/containers"
	I1006 14:21:49.273512  649678 command_runner.go:130] > # Path to the "imagestore". If set, CRI-O stores all of its images in this directory, separately from Root.
	I1006 14:21:49.273525  649678 command_runner.go:130] > # imagestore = ""
	I1006 14:21:49.273535  649678 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1006 14:21:49.273548  649678 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1006 14:21:49.273561  649678 command_runner.go:130] > # storage_driver = "overlay"
	I1006 14:21:49.273574  649678 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1006 14:21:49.273591  649678 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1006 14:21:49.273599  649678 command_runner.go:130] > # storage_option = [
	I1006 14:21:49.273613  649678 command_runner.go:130] > # ]
	I1006 14:21:49.273623  649678 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1006 14:21:49.273635  649678 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1006 14:21:49.273642  649678 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1006 14:21:49.273652  649678 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1006 14:21:49.273664  649678 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1006 14:21:49.273678  649678 command_runner.go:130] > # always happen on a node reboot
	I1006 14:21:49.273690  649678 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1006 14:21:49.273712  649678 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1006 14:21:49.273725  649678 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1006 14:21:49.273743  649678 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1006 14:21:49.273751  649678 command_runner.go:130] > # version_file_persist = ""
	I1006 14:21:49.273764  649678 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1006 14:21:49.273781  649678 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1006 14:21:49.273792  649678 command_runner.go:130] > # internal_wipe = true
	I1006 14:21:49.273806  649678 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1006 14:21:49.273819  649678 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1006 14:21:49.273829  649678 command_runner.go:130] > # internal_repair = true
	I1006 14:21:49.273842  649678 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1006 14:21:49.273856  649678 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1006 14:21:49.273870  649678 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1006 14:21:49.273880  649678 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1006 14:21:49.273894  649678 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1006 14:21:49.273901  649678 command_runner.go:130] > [crio.api]
	I1006 14:21:49.273915  649678 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1006 14:21:49.273926  649678 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1006 14:21:49.273935  649678 command_runner.go:130] > # IP address on which the stream server will listen.
	I1006 14:21:49.273947  649678 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1006 14:21:49.273963  649678 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1006 14:21:49.273975  649678 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1006 14:21:49.273987  649678 command_runner.go:130] > # stream_port = "0"
	I1006 14:21:49.274002  649678 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1006 14:21:49.274013  649678 command_runner.go:130] > # stream_enable_tls = false
	I1006 14:21:49.274023  649678 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1006 14:21:49.274035  649678 command_runner.go:130] > # stream_idle_timeout = ""
	I1006 14:21:49.274045  649678 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1006 14:21:49.274059  649678 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes.
	I1006 14:21:49.274068  649678 command_runner.go:130] > # stream_tls_cert = ""
	I1006 14:21:49.274083  649678 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1006 14:21:49.274109  649678 command_runner.go:130] > # change and CRI-O will automatically pick up the changes.
	I1006 14:21:49.274132  649678 command_runner.go:130] > # stream_tls_key = ""
	I1006 14:21:49.274143  649678 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1006 14:21:49.274153  649678 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1006 14:21:49.274162  649678 command_runner.go:130] > # automatically pick up the changes.
	I1006 14:21:49.274173  649678 command_runner.go:130] > # stream_tls_ca = ""
	I1006 14:21:49.274218  649678 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1006 14:21:49.274233  649678 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1006 14:21:49.274245  649678 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1006 14:21:49.274257  649678 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1006 14:21:49.274268  649678 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1006 14:21:49.274281  649678 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1006 14:21:49.274293  649678 command_runner.go:130] > [crio.runtime]
	I1006 14:21:49.274303  649678 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1006 14:21:49.274315  649678 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1006 14:21:49.274325  649678 command_runner.go:130] > # "nofile=1024:2048"
	I1006 14:21:49.274336  649678 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1006 14:21:49.274347  649678 command_runner.go:130] > # default_ulimits = [
	I1006 14:21:49.274353  649678 command_runner.go:130] > # ]
	I1006 14:21:49.274363  649678 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1006 14:21:49.274374  649678 command_runner.go:130] > # no_pivot = false
	I1006 14:21:49.274384  649678 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1006 14:21:49.274399  649678 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1006 14:21:49.274410  649678 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1006 14:21:49.274425  649678 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1006 14:21:49.274437  649678 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1006 14:21:49.274453  649678 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1006 14:21:49.274464  649678 command_runner.go:130] > # conmon = ""
	I1006 14:21:49.274473  649678 command_runner.go:130] > # Cgroup setting for conmon
	I1006 14:21:49.274487  649678 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1006 14:21:49.274498  649678 command_runner.go:130] > conmon_cgroup = "pod"
	I1006 14:21:49.274508  649678 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1006 14:21:49.274520  649678 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1006 14:21:49.274533  649678 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1006 14:21:49.274545  649678 command_runner.go:130] > # conmon_env = [
	I1006 14:21:49.274559  649678 command_runner.go:130] > # ]
	I1006 14:21:49.274566  649678 command_runner.go:130] > # Additional environment variables to set for all the
	I1006 14:21:49.274574  649678 command_runner.go:130] > # containers. These are overridden if set in the
	I1006 14:21:49.274583  649678 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1006 14:21:49.274593  649678 command_runner.go:130] > # default_env = [
	I1006 14:21:49.274599  649678 command_runner.go:130] > # ]
	I1006 14:21:49.274610  649678 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1006 14:21:49.274625  649678 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I1006 14:21:49.274633  649678 command_runner.go:130] > # selinux = false
	I1006 14:21:49.274646  649678 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1006 14:21:49.274658  649678 command_runner.go:130] > # for the runtime. If not specified or set to "", then the internal default seccomp profile will be used.
	I1006 14:21:49.274677  649678 command_runner.go:130] > # This option supports live configuration reload.
	I1006 14:21:49.274687  649678 command_runner.go:130] > # seccomp_profile = ""
	I1006 14:21:49.274698  649678 command_runner.go:130] > # Enable a seccomp profile for privileged containers from the local path.
	I1006 14:21:49.274707  649678 command_runner.go:130] > # This option supports live configuration reload.
	I1006 14:21:49.274715  649678 command_runner.go:130] > # privileged_seccomp_profile = ""
	I1006 14:21:49.274733  649678 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1006 14:21:49.274744  649678 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1006 14:21:49.274754  649678 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1006 14:21:49.274768  649678 command_runner.go:130] > # the profile is set to "unconfined", then this is equivalent to disabling AppArmor.
	I1006 14:21:49.274776  649678 command_runner.go:130] > # This option supports live configuration reload.
	I1006 14:21:49.274784  649678 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1006 14:21:49.274794  649678 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1006 14:21:49.274802  649678 command_runner.go:130] > # the cgroup blockio controller.
	I1006 14:21:49.274809  649678 command_runner.go:130] > # blockio_config_file = ""
	I1006 14:21:49.274820  649678 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1006 14:21:49.274828  649678 command_runner.go:130] > # blockio parameters.
	I1006 14:21:49.274840  649678 command_runner.go:130] > # blockio_reload = false
	I1006 14:21:49.274849  649678 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1006 14:21:49.274856  649678 command_runner.go:130] > # irqbalance daemon.
	I1006 14:21:49.274870  649678 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1006 14:21:49.274886  649678 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1006 14:21:49.274901  649678 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1006 14:21:49.274915  649678 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1006 14:21:49.274927  649678 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1006 14:21:49.274933  649678 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1006 14:21:49.274941  649678 command_runner.go:130] > # This option supports live configuration reload.
	I1006 14:21:49.274945  649678 command_runner.go:130] > # rdt_config_file = ""
	I1006 14:21:49.274950  649678 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1006 14:21:49.274955  649678 command_runner.go:130] > # cgroup_manager = "systemd"
	I1006 14:21:49.274962  649678 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1006 14:21:49.274968  649678 command_runner.go:130] > # separate_pull_cgroup = ""
	I1006 14:21:49.274974  649678 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1006 14:21:49.274982  649678 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1006 14:21:49.274986  649678 command_runner.go:130] > # will be added.
	I1006 14:21:49.274991  649678 command_runner.go:130] > # default_capabilities = [
	I1006 14:21:49.274994  649678 command_runner.go:130] > # 	"CHOWN",
	I1006 14:21:49.274998  649678 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1006 14:21:49.275001  649678 command_runner.go:130] > # 	"FSETID",
	I1006 14:21:49.275004  649678 command_runner.go:130] > # 	"FOWNER",
	I1006 14:21:49.275008  649678 command_runner.go:130] > # 	"SETGID",
	I1006 14:21:49.275026  649678 command_runner.go:130] > # 	"SETUID",
	I1006 14:21:49.275033  649678 command_runner.go:130] > # 	"SETPCAP",
	I1006 14:21:49.275037  649678 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1006 14:21:49.275040  649678 command_runner.go:130] > # 	"KILL",
	I1006 14:21:49.275043  649678 command_runner.go:130] > # ]
	I1006 14:21:49.275051  649678 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1006 14:21:49.275059  649678 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1006 14:21:49.275064  649678 command_runner.go:130] > # add_inheritable_capabilities = false
	I1006 14:21:49.275071  649678 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1006 14:21:49.275077  649678 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1006 14:21:49.275083  649678 command_runner.go:130] > default_sysctls = [
	I1006 14:21:49.275087  649678 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1006 14:21:49.275090  649678 command_runner.go:130] > ]
	I1006 14:21:49.275096  649678 command_runner.go:130] > # List of devices on the host that a
	I1006 14:21:49.275104  649678 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1006 14:21:49.275109  649678 command_runner.go:130] > # allowed_devices = [
	I1006 14:21:49.275122  649678 command_runner.go:130] > # 	"/dev/fuse",
	I1006 14:21:49.275128  649678 command_runner.go:130] > # 	"/dev/net/tun",
	I1006 14:21:49.275132  649678 command_runner.go:130] > # ]
	I1006 14:21:49.275136  649678 command_runner.go:130] > # List of additional devices, specified as
	I1006 14:21:49.275146  649678 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1006 14:21:49.275151  649678 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1006 14:21:49.275156  649678 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1006 14:21:49.275162  649678 command_runner.go:130] > # additional_devices = [
	I1006 14:21:49.275166  649678 command_runner.go:130] > # ]
	I1006 14:21:49.275170  649678 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1006 14:21:49.275176  649678 command_runner.go:130] > # cdi_spec_dirs = [
	I1006 14:21:49.275180  649678 command_runner.go:130] > # 	"/etc/cdi",
	I1006 14:21:49.275184  649678 command_runner.go:130] > # 	"/var/run/cdi",
	I1006 14:21:49.275189  649678 command_runner.go:130] > # ]
	I1006 14:21:49.275195  649678 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1006 14:21:49.275216  649678 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1006 14:21:49.275225  649678 command_runner.go:130] > # Defaults to false.
	I1006 14:21:49.275239  649678 command_runner.go:130] > # device_ownership_from_security_context = false
	I1006 14:21:49.275249  649678 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1006 14:21:49.275255  649678 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1006 14:21:49.275262  649678 command_runner.go:130] > # hooks_dir = [
	I1006 14:21:49.275267  649678 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1006 14:21:49.275273  649678 command_runner.go:130] > # ]
	I1006 14:21:49.275278  649678 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1006 14:21:49.275284  649678 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1006 14:21:49.275292  649678 command_runner.go:130] > # its default mounts from the following two files:
	I1006 14:21:49.275295  649678 command_runner.go:130] > #
	I1006 14:21:49.275300  649678 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1006 14:21:49.275309  649678 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1006 14:21:49.275315  649678 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1006 14:21:49.275328  649678 command_runner.go:130] > #
	I1006 14:21:49.275338  649678 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1006 14:21:49.275345  649678 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1006 14:21:49.275353  649678 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1006 14:21:49.275358  649678 command_runner.go:130] > #      only add mounts it finds in this file.
	I1006 14:21:49.275364  649678 command_runner.go:130] > #
	I1006 14:21:49.275370  649678 command_runner.go:130] > # default_mounts_file = ""
	I1006 14:21:49.275378  649678 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1006 14:21:49.275385  649678 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1006 14:21:49.275391  649678 command_runner.go:130] > # pids_limit = -1
	I1006 14:21:49.275398  649678 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1006 14:21:49.275406  649678 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1006 14:21:49.275412  649678 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1006 14:21:49.275420  649678 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1006 14:21:49.275426  649678 command_runner.go:130] > # log_size_max = -1
	I1006 14:21:49.275433  649678 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1006 14:21:49.275439  649678 command_runner.go:130] > # log_to_journald = false
	I1006 14:21:49.275445  649678 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1006 14:21:49.275452  649678 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1006 14:21:49.275457  649678 command_runner.go:130] > # Path to directory for container attach sockets.
	I1006 14:21:49.275463  649678 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1006 14:21:49.275467  649678 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1006 14:21:49.275474  649678 command_runner.go:130] > # bind_mount_prefix = ""
	I1006 14:21:49.275479  649678 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1006 14:21:49.275485  649678 command_runner.go:130] > # read_only = false
	I1006 14:21:49.275491  649678 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1006 14:21:49.275497  649678 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1006 14:21:49.275504  649678 command_runner.go:130] > # live configuration reload.
	I1006 14:21:49.275508  649678 command_runner.go:130] > # log_level = "info"
	I1006 14:21:49.275513  649678 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1006 14:21:49.275521  649678 command_runner.go:130] > # This option supports live configuration reload.
	I1006 14:21:49.275525  649678 command_runner.go:130] > # log_filter = ""
	I1006 14:21:49.275530  649678 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1006 14:21:49.275542  649678 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1006 14:21:49.275549  649678 command_runner.go:130] > # separated by comma.
	I1006 14:21:49.275557  649678 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1006 14:21:49.275563  649678 command_runner.go:130] > # uid_mappings = ""
	I1006 14:21:49.275569  649678 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1006 14:21:49.275577  649678 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1006 14:21:49.275585  649678 command_runner.go:130] > # separated by comma.
	I1006 14:21:49.275594  649678 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1006 14:21:49.275598  649678 command_runner.go:130] > # gid_mappings = ""
	I1006 14:21:49.275606  649678 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1006 14:21:49.275614  649678 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1006 14:21:49.275621  649678 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1006 14:21:49.275630  649678 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1006 14:21:49.275634  649678 command_runner.go:130] > # minimum_mappable_uid = -1
	I1006 14:21:49.275640  649678 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1006 14:21:49.275648  649678 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1006 14:21:49.275654  649678 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1006 14:21:49.275664  649678 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1006 14:21:49.275668  649678 command_runner.go:130] > # minimum_mappable_gid = -1
	I1006 14:21:49.275676  649678 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1006 14:21:49.275683  649678 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1006 14:21:49.275690  649678 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1006 14:21:49.275694  649678 command_runner.go:130] > # ctr_stop_timeout = 30
	I1006 14:21:49.275700  649678 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1006 14:21:49.275706  649678 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1006 14:21:49.275711  649678 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1006 14:21:49.275718  649678 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1006 14:21:49.275722  649678 command_runner.go:130] > # drop_infra_ctr = true
	I1006 14:21:49.275731  649678 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1006 14:21:49.275736  649678 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1006 14:21:49.275746  649678 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1006 14:21:49.275752  649678 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1006 14:21:49.275759  649678 command_runner.go:130] > # shared_cpuset determines the CPU set which is allowed to be shared between guaranteed containers,
	I1006 14:21:49.275772  649678 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1006 14:21:49.275778  649678 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1006 14:21:49.275786  649678 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1006 14:21:49.275790  649678 command_runner.go:130] > # shared_cpuset = ""
	I1006 14:21:49.275800  649678 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1006 14:21:49.275805  649678 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1006 14:21:49.275811  649678 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1006 14:21:49.275817  649678 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1006 14:21:49.275824  649678 command_runner.go:130] > # pinns_path = ""
	I1006 14:21:49.275829  649678 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1006 14:21:49.275838  649678 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1006 14:21:49.275842  649678 command_runner.go:130] > # enable_criu_support = true
	I1006 14:21:49.275849  649678 command_runner.go:130] > # Enable/disable the generation of the container,
	I1006 14:21:49.275855  649678 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1006 14:21:49.275859  649678 command_runner.go:130] > # enable_pod_events = false
	I1006 14:21:49.275865  649678 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1006 14:21:49.275872  649678 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1006 14:21:49.275876  649678 command_runner.go:130] > # default_runtime = "crun"
	I1006 14:21:49.275880  649678 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1006 14:21:49.275887  649678 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of being created as a directory).
	I1006 14:21:49.275898  649678 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1006 14:21:49.275906  649678 command_runner.go:130] > # creation as a file is not desired either.
	I1006 14:21:49.275914  649678 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1006 14:21:49.275921  649678 command_runner.go:130] > # the hostname is being managed dynamically.
	I1006 14:21:49.275925  649678 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1006 14:21:49.275930  649678 command_runner.go:130] > # ]
	I1006 14:21:49.275936  649678 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1006 14:21:49.275945  649678 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1006 14:21:49.275951  649678 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1006 14:21:49.275955  649678 command_runner.go:130] > # Each entry in the table should follow the format:
	I1006 14:21:49.275961  649678 command_runner.go:130] > #
	I1006 14:21:49.275965  649678 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1006 14:21:49.275969  649678 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1006 14:21:49.275980  649678 command_runner.go:130] > # runtime_type = "oci"
	I1006 14:21:49.275988  649678 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1006 14:21:49.275993  649678 command_runner.go:130] > # inherit_default_runtime = false
	I1006 14:21:49.275997  649678 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1006 14:21:49.276002  649678 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1006 14:21:49.276009  649678 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1006 14:21:49.276013  649678 command_runner.go:130] > # monitor_env = []
	I1006 14:21:49.276020  649678 command_runner.go:130] > # privileged_without_host_devices = false
	I1006 14:21:49.276024  649678 command_runner.go:130] > # allowed_annotations = []
	I1006 14:21:49.276029  649678 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1006 14:21:49.276035  649678 command_runner.go:130] > # no_sync_log = false
	I1006 14:21:49.276039  649678 command_runner.go:130] > # default_annotations = {}
	I1006 14:21:49.276044  649678 command_runner.go:130] > # stream_websockets = false
	I1006 14:21:49.276052  649678 command_runner.go:130] > # seccomp_profile = ""
	I1006 14:21:49.276074  649678 command_runner.go:130] > # Where:
	I1006 14:21:49.276087  649678 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1006 14:21:49.276100  649678 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1006 14:21:49.276111  649678 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1006 14:21:49.276124  649678 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1006 14:21:49.276128  649678 command_runner.go:130] > #   in $PATH.
	I1006 14:21:49.276137  649678 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1006 14:21:49.276141  649678 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1006 14:21:49.276149  649678 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1006 14:21:49.276153  649678 command_runner.go:130] > #   state.
	I1006 14:21:49.276159  649678 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1006 14:21:49.276165  649678 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1006 14:21:49.276173  649678 command_runner.go:130] > # - inherit_default_runtime (optional, bool): when true the runtime_path,
	I1006 14:21:49.276179  649678 command_runner.go:130] > #   runtime_type, runtime_root and runtime_config_path will be replaced by
	I1006 14:21:49.276186  649678 command_runner.go:130] > #   the values from the default runtime on load time.
	I1006 14:21:49.276193  649678 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1006 14:21:49.276200  649678 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1006 14:21:49.276242  649678 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1006 14:21:49.276258  649678 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1006 14:21:49.276269  649678 command_runner.go:130] > #   The currently recognized values are:
	I1006 14:21:49.276276  649678 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1006 14:21:49.276286  649678 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1006 14:21:49.276294  649678 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1006 14:21:49.276300  649678 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1006 14:21:49.276308  649678 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1006 14:21:49.276314  649678 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1006 14:21:49.276323  649678 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1006 14:21:49.276330  649678 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1006 14:21:49.276338  649678 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1006 14:21:49.276344  649678 command_runner.go:130] > #   "seccomp-profile.kubernetes.cri-o.io" for setting the seccomp profile for:
	I1006 14:21:49.276353  649678 command_runner.go:130] > #     - a specific container by using: "seccomp-profile.kubernetes.cri-o.io/<CONTAINER_NAME>"
	I1006 14:21:49.276359  649678 command_runner.go:130] > #     - a whole pod by using: "seccomp-profile.kubernetes.cri-o.io/POD"
	I1006 14:21:49.276370  649678 command_runner.go:130] > #     Note that the annotation works on containers as well as on images.
	I1006 14:21:49.276380  649678 command_runner.go:130] > #     For images, the plain annotation "seccomp-profile.kubernetes.cri-o.io"
	I1006 14:21:49.276386  649678 command_runner.go:130] > #     can be used without the required "/POD" suffix or a container name.
	I1006 14:21:49.276396  649678 command_runner.go:130] > #   "io.kubernetes.cri-o.DisableFIPS" for disabling FIPS mode in a Kubernetes pod within a FIPS-enabled cluster.
	I1006 14:21:49.276402  649678 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1006 14:21:49.276409  649678 command_runner.go:130] > #   deprecated option "conmon".
	I1006 14:21:49.276416  649678 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1006 14:21:49.276423  649678 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1006 14:21:49.276429  649678 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1006 14:21:49.276437  649678 command_runner.go:130] > #   should be moved to the container's cgroup
	I1006 14:21:49.276444  649678 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1006 14:21:49.276451  649678 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1006 14:21:49.276459  649678 command_runner.go:130] > #   When using the pod runtime and conmon-rs, then the monitor_env can be used to further configure
	I1006 14:21:49.276465  649678 command_runner.go:130] > #   conmon-rs by using:
	I1006 14:21:49.276472  649678 command_runner.go:130] > #     - LOG_DRIVER=[none,systemd,stdout] - Enable logging to the configured target, defaults to none.
	I1006 14:21:49.276481  649678 command_runner.go:130] > #     - HEAPTRACK_OUTPUT_PATH=/path/to/dir - Enable heaptrack profiling and save the files to the set directory.
	I1006 14:21:49.276488  649678 command_runner.go:130] > #     - HEAPTRACK_BINARY_PATH=/path/to/heaptrack - Enable heaptrack profiling and use set heaptrack binary.
	I1006 14:21:49.276494  649678 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1006 14:21:49.276502  649678 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1006 14:21:49.276509  649678 command_runner.go:130] > # - container_min_memory (optional, string): The minimum memory that must be set for a container.
	I1006 14:21:49.276519  649678 command_runner.go:130] > #   This value can be used to override the currently set global value for a specific runtime. If not set,
	I1006 14:21:49.276524  649678 command_runner.go:130] > #   a global default value of "12 MiB" will be used.
	I1006 14:21:49.276534  649678 command_runner.go:130] > # - no_sync_log (optional, bool): If set to true, the runtime will not sync the log file on rotate or container exit.
	I1006 14:21:49.276543  649678 command_runner.go:130] > #   This option is only valid for the 'oci' runtime type. Setting this option to true can cause data loss, e.g.
	I1006 14:21:49.276551  649678 command_runner.go:130] > #   when the machine crashes.
	I1006 14:21:49.276558  649678 command_runner.go:130] > # - default_annotations (optional, map): Default annotations if not overridden by the pod spec.
	I1006 14:21:49.276568  649678 command_runner.go:130] > # - stream_websockets (optional, bool): Enable the WebSocket protocol for container exec, attach and port forward.
	I1006 14:21:49.276576  649678 command_runner.go:130] > # - seccomp_profile (optional, string): The absolute path of the seccomp.json profile which is used as the default
	I1006 14:21:49.276583  649678 command_runner.go:130] > #   seccomp profile for the runtime.
	I1006 14:21:49.276589  649678 command_runner.go:130] > #   If not specified or set to "", the runtime seccomp_profile will be used.
	I1006 14:21:49.276598  649678 command_runner.go:130] > #   If that is also not specified or set to "", the internal default seccomp profile will be applied.
	I1006 14:21:49.276601  649678 command_runner.go:130] > #
	I1006 14:21:49.276605  649678 command_runner.go:130] > # Using the seccomp notifier feature:
	I1006 14:21:49.276610  649678 command_runner.go:130] > #
	I1006 14:21:49.276617  649678 command_runner.go:130] > # This feature can help you debug seccomp-related issues, for example if
	I1006 14:21:49.276626  649678 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1006 14:21:49.276629  649678 command_runner.go:130] > #
	I1006 14:21:49.276635  649678 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1006 14:21:49.276643  649678 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1006 14:21:49.276646  649678 command_runner.go:130] > #
	I1006 14:21:49.276655  649678 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1006 14:21:49.276664  649678 command_runner.go:130] > # feature.
	I1006 14:21:49.276670  649678 command_runner.go:130] > #
	I1006 14:21:49.276684  649678 command_runner.go:130] > # If everything is set up, CRI-O will modify chosen seccomp profiles for
	I1006 14:21:49.276693  649678 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1006 14:21:49.276700  649678 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1006 14:21:49.276708  649678 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1006 14:21:49.276714  649678 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1006 14:21:49.276720  649678 command_runner.go:130] > #
	I1006 14:21:49.276726  649678 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1006 14:21:49.276734  649678 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1006 14:21:49.276737  649678 command_runner.go:130] > #
	I1006 14:21:49.276745  649678 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I1006 14:21:49.276765  649678 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1006 14:21:49.276775  649678 command_runner.go:130] > #
	I1006 14:21:49.276785  649678 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1006 14:21:49.276795  649678 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1006 14:21:49.276798  649678 command_runner.go:130] > # limitation.
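	For reference, a minimal sketch (not taken from this run) of a dedicated runtime-handler stanza that allows the seccomp notifier annotation described above; the handler name and root path are illustrative:
	
	[crio.runtime.runtimes.notifier-runc]
	runtime_path = "/usr/libexec/crio/runc"
	runtime_type = "oci"
	runtime_root = "/run/notifier-runc"
	monitor_path = "/usr/libexec/crio/conmon"
	allowed_annotations = [
		"io.kubernetes.cri-o.seccompNotifierAction",
	]
	
	A pod then opts in by setting the annotation "io.kubernetes.cri-o.seccompNotifierAction=stop" and must use restartPolicy "Never", since the kubelet would otherwise restart the terminated container immediately.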
	I1006 14:21:49.276802  649678 command_runner.go:130] > [crio.runtime.runtimes.crun]
	I1006 14:21:49.276807  649678 command_runner.go:130] > runtime_path = "/usr/libexec/crio/crun"
	I1006 14:21:49.276815  649678 command_runner.go:130] > runtime_type = ""
	I1006 14:21:49.276822  649678 command_runner.go:130] > runtime_root = "/run/crun"
	I1006 14:21:49.276833  649678 command_runner.go:130] > inherit_default_runtime = false
	I1006 14:21:49.276841  649678 command_runner.go:130] > runtime_config_path = ""
	I1006 14:21:49.276851  649678 command_runner.go:130] > container_min_memory = ""
	I1006 14:21:49.276860  649678 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1006 14:21:49.276871  649678 command_runner.go:130] > monitor_cgroup = "pod"
	I1006 14:21:49.276877  649678 command_runner.go:130] > monitor_exec_cgroup = ""
	I1006 14:21:49.276883  649678 command_runner.go:130] > allowed_annotations = [
	I1006 14:21:49.276890  649678 command_runner.go:130] > 	"io.containers.trace-syscall",
	I1006 14:21:49.276896  649678 command_runner.go:130] > ]
	I1006 14:21:49.276902  649678 command_runner.go:130] > privileged_without_host_devices = false
	I1006 14:21:49.276909  649678 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1006 14:21:49.276916  649678 command_runner.go:130] > runtime_path = "/usr/libexec/crio/runc"
	I1006 14:21:49.276922  649678 command_runner.go:130] > runtime_type = ""
	I1006 14:21:49.276929  649678 command_runner.go:130] > runtime_root = "/run/runc"
	I1006 14:21:49.276936  649678 command_runner.go:130] > inherit_default_runtime = false
	I1006 14:21:49.276946  649678 command_runner.go:130] > runtime_config_path = ""
	I1006 14:21:49.276954  649678 command_runner.go:130] > container_min_memory = ""
	I1006 14:21:49.276967  649678 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1006 14:21:49.276978  649678 command_runner.go:130] > monitor_cgroup = "pod"
	I1006 14:21:49.276984  649678 command_runner.go:130] > monitor_exec_cgroup = ""
	I1006 14:21:49.276991  649678 command_runner.go:130] > privileged_without_host_devices = false
	I1006 14:21:49.276998  649678 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1006 14:21:49.277005  649678 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1006 14:21:49.277012  649678 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1006 14:21:49.277036  649678 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I1006 14:21:49.277057  649678 command_runner.go:130] > # The currently supported resources are "cpuperiod", "cpuquota", "cpushares", "cpulimit" and "cpuset". The values for "cpuperiod" and "cpuquota" are denoted in microseconds.
	I1006 14:21:49.277077  649678 command_runner.go:130] > # The value for "cpulimit" is denoted in millicores; this value is used to calculate the "cpuquota" with the supplied "cpuperiod" or the default "cpuperiod".
	I1006 14:21:49.277093  649678 command_runner.go:130] > # Note that the "cpulimit" field overrides the "cpuquota" value supplied in this configuration.
	I1006 14:21:49.277104  649678 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1006 14:21:49.277125  649678 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1006 14:21:49.277141  649678 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1006 14:21:49.277151  649678 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1006 14:21:49.277167  649678 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1006 14:21:49.277177  649678 command_runner.go:130] > # Example:
	I1006 14:21:49.277189  649678 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1006 14:21:49.277201  649678 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1006 14:21:49.277225  649678 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1006 14:21:49.277238  649678 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1006 14:21:49.277249  649678 command_runner.go:130] > # cpuset = "0-1"
	I1006 14:21:49.277260  649678 command_runner.go:130] > # cpushares = "5"
	I1006 14:21:49.277270  649678 command_runner.go:130] > # cpuquota = "1000"
	I1006 14:21:49.277281  649678 command_runner.go:130] > # cpuperiod = "100000"
	I1006 14:21:49.277292  649678 command_runner.go:130] > # cpulimit = "35"
	I1006 14:21:49.277300  649678 command_runner.go:130] > # Where:
	I1006 14:21:49.277307  649678 command_runner.go:130] > # The workload name is workload-type.
	I1006 14:21:49.277323  649678 command_runner.go:130] > # To opt in, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1006 14:21:49.277336  649678 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1006 14:21:49.277349  649678 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1006 14:21:49.277366  649678 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1006 14:21:49.277381  649678 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1006 14:21:49.277393  649678 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1006 14:21:49.277406  649678 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1006 14:21:49.277416  649678 command_runner.go:130] > # Default value is set to true
	I1006 14:21:49.277427  649678 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1006 14:21:49.277441  649678 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1006 14:21:49.277453  649678 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1006 14:21:49.277465  649678 command_runner.go:130] > # Default value is set to 'false'
	I1006 14:21:49.277479  649678 command_runner.go:130] > # disable_hostport_mapping = false
	I1006 14:21:49.277492  649678 command_runner.go:130] > # timezone sets the timezone for a container in CRI-O.
	I1006 14:21:49.277513  649678 command_runner.go:130] > # If an empty string is provided, CRI-O retains its default behavior. Use 'Local' to match the timezone of the host machine.
	I1006 14:21:49.277521  649678 command_runner.go:130] > # timezone = ""
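	To illustrate the three toggles above, a sketch that keeps the SELinux and hostport defaults while pinning the container timezone to the host's (values illustrative):
	
	hostnetwork_disable_selinux = true
	disable_hostport_mapping = false
	timezone = "Local"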
	I1006 14:21:49.277531  649678 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1006 14:21:49.277536  649678 command_runner.go:130] > #
	I1006 14:21:49.277547  649678 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1006 14:21:49.277557  649678 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf.
	I1006 14:21:49.277565  649678 command_runner.go:130] > [crio.image]
	I1006 14:21:49.277578  649678 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1006 14:21:49.277589  649678 command_runner.go:130] > # default_transport = "docker://"
	I1006 14:21:49.277603  649678 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1006 14:21:49.277617  649678 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1006 14:21:49.277627  649678 command_runner.go:130] > # global_auth_file = ""
	I1006 14:21:49.277652  649678 command_runner.go:130] > # The image used to instantiate infra containers.
	I1006 14:21:49.277665  649678 command_runner.go:130] > # This option supports live configuration reload.
	I1006 14:21:49.277675  649678 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.10.1"
	I1006 14:21:49.277690  649678 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1006 14:21:49.277704  649678 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1006 14:21:49.277715  649678 command_runner.go:130] > # This option supports live configuration reload.
	I1006 14:21:49.277730  649678 command_runner.go:130] > # pause_image_auth_file = ""
	I1006 14:21:49.277741  649678 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1006 14:21:49.277755  649678 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I1006 14:21:49.277770  649678 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I1006 14:21:49.277785  649678 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1006 14:21:49.277796  649678 command_runner.go:130] > # pause_command = "/pause"
	I1006 14:21:49.277811  649678 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1006 14:21:49.277824  649678 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1006 14:21:49.277838  649678 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1006 14:21:49.277851  649678 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1006 14:21:49.277864  649678 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1006 14:21:49.277879  649678 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1006 14:21:49.277889  649678 command_runner.go:130] > # pinned_images = [
	I1006 14:21:49.277904  649678 command_runner.go:130] > # ]
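	A sketch of the three pattern styles the pinned_images comment describes (image names illustrative, not from this run):
	
	pinned_images = [
		"registry.k8s.io/pause:3.10.1", # exact match on the full name
		"registry.k8s.io/kube-*",       # glob: wildcard only at the end
		"*etcd*",                       # keyword: wildcards on both ends
	]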
	I1006 14:21:49.277918  649678 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1006 14:21:49.277929  649678 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1006 14:21:49.277943  649678 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1006 14:21:49.277957  649678 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1006 14:21:49.277969  649678 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1006 14:21:49.277982  649678 command_runner.go:130] > signature_policy = "/etc/crio/policy.json"
	I1006 14:21:49.277994  649678 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1006 14:21:49.278013  649678 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1006 14:21:49.278025  649678 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1006 14:21:49.278042  649678 command_runner.go:130] > # or the concatenated path is nonexistent, then the signature_policy or system
	I1006 14:21:49.278056  649678 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1006 14:21:49.278069  649678 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1006 14:21:49.278083  649678 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1006 14:21:49.278099  649678 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1006 14:21:49.278109  649678 command_runner.go:130] > # changing them here.
	I1006 14:21:49.278127  649678 command_runner.go:130] > # This option is deprecated. Use registries.conf file instead.
	I1006 14:21:49.278138  649678 command_runner.go:130] > # insecure_registries = [
	I1006 14:21:49.278148  649678 command_runner.go:130] > # ]
	I1006 14:21:49.278163  649678 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1006 14:21:49.278181  649678 command_runner.go:130] > # ignore; the last will ignore volumes entirely.
	I1006 14:21:49.278192  649678 command_runner.go:130] > # image_volumes = "mkdir"
	I1006 14:21:49.278214  649678 command_runner.go:130] > # Temporary directory to use for storing big files
	I1006 14:21:49.278227  649678 command_runner.go:130] > # big_files_temporary_dir = ""
	I1006 14:21:49.278237  649678 command_runner.go:130] > # If true, CRI-O will automatically reload the mirror registry when
	I1006 14:21:49.278253  649678 command_runner.go:130] > # there is an update to the 'registries.conf.d' directory. Default value is set to 'false'.
	I1006 14:21:49.278265  649678 command_runner.go:130] > # auto_reload_registries = false
	I1006 14:21:49.278278  649678 command_runner.go:130] > # The timeout for an image pull to make progress until the pull operation
	I1006 14:21:49.278294  649678 command_runner.go:130] > # gets canceled. This value is also used to calculate the pull progress interval as pull_progress_timeout / 10.
	I1006 14:21:49.278307  649678 command_runner.go:130] > # Can be set to 0 to disable the timeout as well as the progress output.
	I1006 14:21:49.278317  649678 command_runner.go:130] > # pull_progress_timeout = "0s"
	I1006 14:21:49.278329  649678 command_runner.go:130] > # The mode of short name resolution.
	I1006 14:21:49.278343  649678 command_runner.go:130] > # The valid values are "enforcing" and "disabled", and the default is "enforcing".
	I1006 14:21:49.278364  649678 command_runner.go:130] > # If "enforcing", an image pull will fail if a short name is used and the results are ambiguous.
	I1006 14:21:49.278377  649678 command_runner.go:130] > # If "disabled", the first result will be chosen.
	I1006 14:21:49.278389  649678 command_runner.go:130] > # short_name_mode = "enforcing"
	I1006 14:21:49.278403  649678 command_runner.go:130] > # OCIArtifactMountSupport controls whether CRI-O should support OCI artifacts.
	I1006 14:21:49.278414  649678 command_runner.go:130] > # If set to false, mounting OCI Artifacts will result in an error.
	I1006 14:21:49.278425  649678 command_runner.go:130] > # oci_artifact_mount_support = true
	I1006 14:21:49.278440  649678 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1006 14:21:49.278450  649678 command_runner.go:130] > # CNI plugins.
	I1006 14:21:49.278460  649678 command_runner.go:130] > [crio.network]
	I1006 14:21:49.278474  649678 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1006 14:21:49.278486  649678 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I1006 14:21:49.278497  649678 command_runner.go:130] > # cni_default_network = ""
	I1006 14:21:49.278508  649678 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1006 14:21:49.278519  649678 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1006 14:21:49.278532  649678 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1006 14:21:49.278543  649678 command_runner.go:130] > # plugin_dirs = [
	I1006 14:21:49.278554  649678 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1006 14:21:49.278563  649678 command_runner.go:130] > # ]
	I1006 14:21:49.278574  649678 command_runner.go:130] > # List of included pod metrics.
	I1006 14:21:49.278586  649678 command_runner.go:130] > # included_pod_metrics = [
	I1006 14:21:49.278594  649678 command_runner.go:130] > # ]
	I1006 14:21:49.278605  649678 command_runner.go:130] > # A necessary configuration for Prometheus-based metrics retrieval
	I1006 14:21:49.278615  649678 command_runner.go:130] > [crio.metrics]
	I1006 14:21:49.278627  649678 command_runner.go:130] > # Globally enable or disable metrics support.
	I1006 14:21:49.278639  649678 command_runner.go:130] > # enable_metrics = false
	I1006 14:21:49.278651  649678 command_runner.go:130] > # Specify enabled metrics collectors.
	I1006 14:21:49.278662  649678 command_runner.go:130] > # By default, all metrics are enabled.
	I1006 14:21:49.278676  649678 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1006 14:21:49.278689  649678 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1006 14:21:49.278700  649678 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1006 14:21:49.278712  649678 command_runner.go:130] > # metrics_collectors = [
	I1006 14:21:49.278718  649678 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1006 14:21:49.278727  649678 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1006 14:21:49.278740  649678 command_runner.go:130] > # 	"containers_oom_total",
	I1006 14:21:49.278747  649678 command_runner.go:130] > # 	"processes_defunct",
	I1006 14:21:49.278754  649678 command_runner.go:130] > # 	"operations_total",
	I1006 14:21:49.278761  649678 command_runner.go:130] > # 	"operations_latency_seconds",
	I1006 14:21:49.278769  649678 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1006 14:21:49.278776  649678 command_runner.go:130] > # 	"operations_errors_total",
	I1006 14:21:49.278786  649678 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1006 14:21:49.278798  649678 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1006 14:21:49.278810  649678 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1006 14:21:49.278822  649678 command_runner.go:130] > # 	"image_pulls_success_total",
	I1006 14:21:49.278833  649678 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1006 14:21:49.278844  649678 command_runner.go:130] > # 	"containers_oom_count_total",
	I1006 14:21:49.278856  649678 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1006 14:21:49.278867  649678 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1006 14:21:49.278878  649678 command_runner.go:130] > # 	"containers_stopped_monitor_count",
	I1006 14:21:49.278886  649678 command_runner.go:130] > # ]
	I1006 14:21:49.278896  649678 command_runner.go:130] > # The IP address or hostname on which the metrics server will listen.
	I1006 14:21:49.278907  649678 command_runner.go:130] > # metrics_host = "127.0.0.1"
	I1006 14:21:49.278916  649678 command_runner.go:130] > # The port on which the metrics server will listen.
	I1006 14:21:49.278927  649678 command_runner.go:130] > # metrics_port = 9090
	I1006 14:21:49.278939  649678 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1006 14:21:49.278950  649678 command_runner.go:130] > # metrics_socket = ""
	I1006 14:21:49.278962  649678 command_runner.go:130] > # The certificate for the secure metrics server.
	I1006 14:21:49.278975  649678 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1006 14:21:49.278986  649678 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1006 14:21:49.278998  649678 command_runner.go:130] > # certificate on any modification event.
	I1006 14:21:49.279009  649678 command_runner.go:130] > # metrics_cert = ""
	I1006 14:21:49.279018  649678 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1006 14:21:49.279031  649678 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1006 14:21:49.279042  649678 command_runner.go:130] > # metrics_key = ""
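	Tying the collector naming rule together, a sketch that enables metrics on the default host/port and selects two collectors, one by bare name and one with the equivalent "crio_" prefix (assumed values, not from this run):
	
	[crio.metrics]
	enable_metrics = true
	metrics_host = "127.0.0.1"
	metrics_port = 9090
	metrics_collectors = [
		"operations",                     # treated the same as "crio_operations"
		"crio_image_pulls_failure_total", # treated the same as "image_pulls_failure_total"
	]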
	I1006 14:21:49.279054  649678 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1006 14:21:49.279065  649678 command_runner.go:130] > [crio.tracing]
	I1006 14:21:49.279078  649678 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1006 14:21:49.279088  649678 command_runner.go:130] > # enable_tracing = false
	I1006 14:21:49.279100  649678 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I1006 14:21:49.279118  649678 command_runner.go:130] > # tracing_endpoint = "127.0.0.1:4317"
	I1006 14:21:49.279133  649678 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1006 14:21:49.279145  649678 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
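	A minimal sketch that turns on tracing against a local OTLP collector and always samples, per the comments above (the endpoint value is the documented default):
	
	[crio.tracing]
	enable_tracing = true
	tracing_endpoint = "127.0.0.1:4317"
	tracing_sampling_rate_per_million = 1000000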
	I1006 14:21:49.279155  649678 command_runner.go:130] > # CRI-O NRI configuration.
	I1006 14:21:49.279165  649678 command_runner.go:130] > [crio.nri]
	I1006 14:21:49.279176  649678 command_runner.go:130] > # Globally enable or disable NRI.
	I1006 14:21:49.279185  649678 command_runner.go:130] > # enable_nri = true
	I1006 14:21:49.279195  649678 command_runner.go:130] > # NRI socket to listen on.
	I1006 14:21:49.279220  649678 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1006 14:21:49.279232  649678 command_runner.go:130] > # NRI plugin directory to use.
	I1006 14:21:49.279239  649678 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1006 14:21:49.279251  649678 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1006 14:21:49.279263  649678 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1006 14:21:49.279276  649678 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1006 14:21:49.279348  649678 command_runner.go:130] > # nri_disable_connections = false
	I1006 14:21:49.279363  649678 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1006 14:21:49.279371  649678 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1006 14:21:49.279381  649678 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1006 14:21:49.279393  649678 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1006 14:21:49.279404  649678 command_runner.go:130] > # NRI default validator configuration.
	I1006 14:21:49.279420  649678 command_runner.go:130] > # If enabled, the builtin default validator can be used to reject a container if some
	I1006 14:21:49.279434  649678 command_runner.go:130] > # NRI plugin requested a restricted adjustment. Currently the following adjustments
	I1006 14:21:49.279445  649678 command_runner.go:130] > # can be restricted/rejected:
	I1006 14:21:49.279455  649678 command_runner.go:130] > # - OCI hook injection
	I1006 14:21:49.279467  649678 command_runner.go:130] > # - adjustment of runtime default seccomp profile
	I1006 14:21:49.279479  649678 command_runner.go:130] > # - adjustment of unconfined seccomp profile
	I1006 14:21:49.279488  649678 command_runner.go:130] > # - adjustment of a custom seccomp profile
	I1006 14:21:49.279499  649678 command_runner.go:130] > # - adjustment of linux namespaces
	I1006 14:21:49.279513  649678 command_runner.go:130] > # Additionally, the default validator can be used to reject container creation if any
	I1006 14:21:49.279528  649678 command_runner.go:130] > # of a required set of plugins has not processed a container creation request, unless
	I1006 14:21:49.279541  649678 command_runner.go:130] > # the container has been annotated to tolerate a missing plugin.
	I1006 14:21:49.279550  649678 command_runner.go:130] > #
	I1006 14:21:49.279561  649678 command_runner.go:130] > # [crio.nri.default_validator]
	I1006 14:21:49.279574  649678 command_runner.go:130] > # nri_enable_default_validator = false
	I1006 14:21:49.279587  649678 command_runner.go:130] > # nri_validator_reject_oci_hook_adjustment = false
	I1006 14:21:49.279600  649678 command_runner.go:130] > # nri_validator_reject_runtime_default_seccomp_adjustment = false
	I1006 14:21:49.279613  649678 command_runner.go:130] > # nri_validator_reject_unconfined_seccomp_adjustment = false
	I1006 14:21:49.279626  649678 command_runner.go:130] > # nri_validator_reject_custom_seccomp_adjustment = false
	I1006 14:21:49.279636  649678 command_runner.go:130] > # nri_validator_reject_namespace_adjustment = false
	I1006 14:21:49.279646  649678 command_runner.go:130] > # nri_validator_required_plugins = [
	I1006 14:21:49.279656  649678 command_runner.go:130] > # ]
	I1006 14:21:49.279668  649678 command_runner.go:130] > # nri_validator_tolerate_missing_plugins_annotation = ""
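	A sketch of the default-validator block described above, rejecting OCI hook injection and requiring one plugin; "my-policy-plugin" is a hypothetical name:
	
	[crio.nri.default_validator]
	nri_enable_default_validator = true
	nri_validator_reject_oci_hook_adjustment = true
	nri_validator_required_plugins = [
		"my-policy-plugin", # hypothetical plugin name
	]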
	I1006 14:21:49.279681  649678 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1006 14:21:49.279691  649678 command_runner.go:130] > [crio.stats]
	I1006 14:21:49.279704  649678 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1006 14:21:49.279717  649678 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1006 14:21:49.279728  649678 command_runner.go:130] > # stats_collection_period = 0
	I1006 14:21:49.279739  649678 command_runner.go:130] > # The number of seconds between collecting pod/container stats and pod
	I1006 14:21:49.279753  649678 command_runner.go:130] > # sandbox metrics. If set to 0, the metrics/stats are collected on-demand instead.
	I1006 14:21:49.279764  649678 command_runner.go:130] > # collection_period = 0
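	For instance, to poll every 10 seconds instead of collecting on demand (a sketch under the semantics described above):
	
	[crio.stats]
	stats_collection_period = 10
	collection_period = 10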
	I1006 14:21:49.279811  649678 command_runner.go:130] ! time="2025-10-06T14:21:49.258239123Z" level=info msg="Updating config from single file: /etc/crio/crio.conf"
	I1006 14:21:49.279828  649678 command_runner.go:130] ! time="2025-10-06T14:21:49.258265766Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf"
	I1006 14:21:49.279842  649678 command_runner.go:130] ! time="2025-10-06T14:21:49.258283938Z" level=info msg="Skipping not-existing config file \"/etc/crio/crio.conf\""
	I1006 14:21:49.279857  649678 command_runner.go:130] ! time="2025-10-06T14:21:49.25830256Z" level=info msg="Updating config from path: /etc/crio/crio.conf.d"
	I1006 14:21:49.279875  649678 command_runner.go:130] ! time="2025-10-06T14:21:49.258357499Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:21:49.279892  649678 command_runner.go:130] ! time="2025-10-06T14:21:49.258517334Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/10-crio.conf"
	I1006 14:21:49.279912  649678 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1006 14:21:49.280045  649678 cni.go:84] Creating CNI manager for ""
	I1006 14:21:49.280059  649678 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1006 14:21:49.280078  649678 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1006 14:21:49.280122  649678 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-135520 NodeName:functional-135520 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1006 14:21:49.280303  649678 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-135520"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1006 14:21:49.280384  649678 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1006 14:21:49.288800  649678 command_runner.go:130] > kubeadm
	I1006 14:21:49.288826  649678 command_runner.go:130] > kubectl
	I1006 14:21:49.288833  649678 command_runner.go:130] > kubelet
	I1006 14:21:49.288864  649678 binaries.go:44] Found k8s binaries, skipping transfer
	I1006 14:21:49.288912  649678 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1006 14:21:49.296476  649678 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1006 14:21:49.308883  649678 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1006 14:21:49.321172  649678 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
	I1006 14:21:49.333376  649678 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1006 14:21:49.336963  649678 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1006 14:21:49.337019  649678 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 14:21:49.424422  649678 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1006 14:21:49.437476  649678 certs.go:69] Setting up /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520 for IP: 192.168.49.2
	I1006 14:21:49.437505  649678 certs.go:195] generating shared ca certs ...
	I1006 14:21:49.437527  649678 certs.go:227] acquiring lock for ca certs: {Name:mka0cc25cb6a953e937aa825fc55167759271aaf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:21:49.437678  649678 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21701-626179/.minikube/ca.key
	I1006 14:21:49.437730  649678 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21701-626179/.minikube/proxy-client-ca.key
	I1006 14:21:49.437748  649678 certs.go:257] generating profile certs ...
	I1006 14:21:49.437847  649678 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/client.key
	I1006 14:21:49.437896  649678 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/apiserver.key.72a46e8e
	I1006 14:21:49.437936  649678 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/proxy-client.key
	I1006 14:21:49.437949  649678 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1006 14:21:49.437963  649678 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1006 14:21:49.437984  649678 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1006 14:21:49.438003  649678 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1006 14:21:49.438018  649678 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1006 14:21:49.438035  649678 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1006 14:21:49.438049  649678 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1006 14:21:49.438064  649678 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1006 14:21:49.438123  649678 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/629719.pem (1338 bytes)
	W1006 14:21:49.438160  649678 certs.go:480] ignoring /home/jenkins/minikube-integration/21701-626179/.minikube/certs/629719_empty.pem, impossibly tiny 0 bytes
	I1006 14:21:49.438171  649678 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca-key.pem (1675 bytes)
	I1006 14:21:49.438196  649678 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem (1082 bytes)
	I1006 14:21:49.438246  649678 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/cert.pem (1123 bytes)
	I1006 14:21:49.438271  649678 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/key.pem (1679 bytes)
	I1006 14:21:49.438316  649678 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem (1708 bytes)
	I1006 14:21:49.438344  649678 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem -> /usr/share/ca-certificates/6297192.pem
	I1006 14:21:49.438359  649678 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1006 14:21:49.438381  649678 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/629719.pem -> /usr/share/ca-certificates/629719.pem
	I1006 14:21:49.439032  649678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1006 14:21:49.456437  649678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1006 14:21:49.473578  649678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1006 14:21:49.490593  649678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1006 14:21:49.508347  649678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1006 14:21:49.525339  649678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1006 14:21:49.541997  649678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1006 14:21:49.558467  649678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1006 14:21:49.576359  649678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem --> /usr/share/ca-certificates/6297192.pem (1708 bytes)
	I1006 14:21:49.593578  649678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1006 14:21:49.610863  649678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/certs/629719.pem --> /usr/share/ca-certificates/629719.pem (1338 bytes)
	I1006 14:21:49.628123  649678 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1006 14:21:49.640270  649678 ssh_runner.go:195] Run: openssl version
	I1006 14:21:49.646279  649678 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1006 14:21:49.646391  649678 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6297192.pem && ln -fs /usr/share/ca-certificates/6297192.pem /etc/ssl/certs/6297192.pem"
	I1006 14:21:49.654553  649678 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6297192.pem
	I1006 14:21:49.658110  649678 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct  6 14:13 /usr/share/ca-certificates/6297192.pem
	I1006 14:21:49.658254  649678 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  6 14:13 /usr/share/ca-certificates/6297192.pem
	I1006 14:21:49.658303  649678 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6297192.pem
	I1006 14:21:49.692318  649678 command_runner.go:130] > 3ec20f2e
	I1006 14:21:49.692406  649678 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6297192.pem /etc/ssl/certs/3ec20f2e.0"
	I1006 14:21:49.700814  649678 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1006 14:21:49.709140  649678 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1006 14:21:49.712721  649678 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct  6 13:56 /usr/share/ca-certificates/minikubeCA.pem
	I1006 14:21:49.712738  649678 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  6 13:56 /usr/share/ca-certificates/minikubeCA.pem
	I1006 14:21:49.712772  649678 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1006 14:21:49.745663  649678 command_runner.go:130] > b5213941
	I1006 14:21:49.745998  649678 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1006 14:21:49.754083  649678 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/629719.pem && ln -fs /usr/share/ca-certificates/629719.pem /etc/ssl/certs/629719.pem"
	I1006 14:21:49.762664  649678 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/629719.pem
	I1006 14:21:49.766415  649678 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct  6 14:13 /usr/share/ca-certificates/629719.pem
	I1006 14:21:49.766461  649678 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  6 14:13 /usr/share/ca-certificates/629719.pem
	I1006 14:21:49.766502  649678 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/629719.pem
	I1006 14:21:49.800644  649678 command_runner.go:130] > 51391683
	I1006 14:21:49.800985  649678 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/629719.pem /etc/ssl/certs/51391683.0"
	I1006 14:21:49.809049  649678 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1006 14:21:49.812721  649678 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1006 14:21:49.812776  649678 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1006 14:21:49.812784  649678 command_runner.go:130] > Device: 8,1	Inode: 580300      Links: 1
	I1006 14:21:49.812793  649678 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1006 14:21:49.812800  649678 command_runner.go:130] > Access: 2025-10-06 14:17:42.533320203 +0000
	I1006 14:21:49.812811  649678 command_runner.go:130] > Modify: 2025-10-06 14:13:37.457627952 +0000
	I1006 14:21:49.812819  649678 command_runner.go:130] > Change: 2025-10-06 14:13:37.457627952 +0000
	I1006 14:21:49.812829  649678 command_runner.go:130] >  Birth: 2025-10-06 14:13:37.457627952 +0000
	I1006 14:21:49.812886  649678 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1006 14:21:49.846896  649678 command_runner.go:130] > Certificate will not expire
	I1006 14:21:49.847277  649678 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1006 14:21:49.881096  649678 command_runner.go:130] > Certificate will not expire
	I1006 14:21:49.881431  649678 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1006 14:21:49.916333  649678 command_runner.go:130] > Certificate will not expire
	I1006 14:21:49.916837  649678 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1006 14:21:49.951128  649678 command_runner.go:130] > Certificate will not expire
	I1006 14:21:49.951323  649678 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1006 14:21:49.984919  649678 command_runner.go:130] > Certificate will not expire
	I1006 14:21:49.985255  649678 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1006 14:21:50.018710  649678 command_runner.go:130] > Certificate will not expire
	I1006 14:21:50.018987  649678 kubeadm.go:400] StartCluster: {Name:functional-135520 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-135520 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 14:21:50.019061  649678 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1006 14:21:50.019118  649678 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1006 14:21:50.047552  649678 cri.go:89] found id: ""
	I1006 14:21:50.047624  649678 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1006 14:21:50.055103  649678 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1006 14:21:50.055125  649678 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1006 14:21:50.055137  649678 command_runner.go:130] > /var/lib/minikube/etcd:
	I1006 14:21:50.055780  649678 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1006 14:21:50.055795  649678 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1006 14:21:50.055835  649678 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1006 14:21:50.063106  649678 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1006 14:21:50.063218  649678 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-135520" does not appear in /home/jenkins/minikube-integration/21701-626179/kubeconfig
	I1006 14:21:50.063263  649678 kubeconfig.go:62] /home/jenkins/minikube-integration/21701-626179/kubeconfig needs updating (will repair): [kubeconfig missing "functional-135520" cluster setting kubeconfig missing "functional-135520" context setting]
	I1006 14:21:50.063581  649678 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-626179/kubeconfig: {Name:mke84a74c9d22714f21826744ac414fa621492d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:21:50.064282  649678 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21701-626179/kubeconfig
	I1006 14:21:50.064435  649678 kapi.go:59] client config for functional-135520: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/client.crt", KeyFile:"/home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/client.key", CAFile:"/home/jenkins/minikube-integration/21701-626179/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1006 14:21:50.064874  649678 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1006 14:21:50.064894  649678 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1006 14:21:50.064898  649678 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1006 14:21:50.064902  649678 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1006 14:21:50.064906  649678 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1006 14:21:50.064950  649678 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1006 14:21:50.065393  649678 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1006 14:21:50.072886  649678 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1006 14:21:50.072922  649678 kubeadm.go:601] duration metric: took 17.120794ms to restartPrimaryControlPlane
	I1006 14:21:50.072932  649678 kubeadm.go:402] duration metric: took 53.951913ms to StartCluster
	I1006 14:21:50.072948  649678 settings.go:142] acquiring lock: {Name:mk49b10f71f24d1f54d5c453b3b04e717e9a9100 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:21:50.073763  649678 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21701-626179/kubeconfig
	I1006 14:21:50.074346  649678 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-626179/kubeconfig: {Name:mke84a74c9d22714f21826744ac414fa621492d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
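	The "needs updating (will repair)" / "Updating kubeconfig" steps above add the missing cluster and context entries for the profile. A hedged sketch of that repair using client-go's clientcmd (the auth-info entry is assumed to exist already; minikube's actual kubeconfig package does more):

	package sketch

	import (
	    "k8s.io/client-go/tools/clientcmd"
	    api "k8s.io/client-go/tools/clientcmd/api"
	)

	func repairKubeconfig(path, name, server string) error {
	    cfg, err := clientcmd.LoadFromFile(path)
	    if err != nil {
	        return err
	    }
	    if _, ok := cfg.Clusters[name]; !ok { // kubeconfig missing cluster setting
	        c := api.NewCluster()
	        c.Server = server
	        cfg.Clusters[name] = c
	    }
	    if _, ok := cfg.Contexts[name]; !ok { // kubeconfig missing context setting
	        ctx := api.NewContext()
	        ctx.Cluster = name
	        ctx.AuthInfo = name
	        cfg.Contexts[name] = ctx
	    }
	    return clientcmd.WriteToFile(*cfg, path)
	}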
	I1006 14:21:50.074579  649678 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1006 14:21:50.074661  649678 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
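	The toEnable map above drives which addons get (re)applied on this restart; only storage-provisioner and default-storageclass are true. A trivial sketch of the filtering step (names trimmed to the two enabled here):

	package main

	import (
	    "fmt"
	    "sort"
	)

	func main() {
	    toEnable := map[string]bool{
	        "default-storageclass": true,
	        "storage-provisioner":  true,
	        "dashboard":            false,
	        "ingress":              false,
	    }
	    var wanted []string
	    for name, on := range toEnable {
	        if on {
	            wanted = append(wanted, name)
	        }
	    }
	    sort.Strings(wanted)
	    fmt.Println("enable addons:", wanted)
	}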
	I1006 14:21:50.074799  649678 addons.go:69] Setting storage-provisioner=true in profile "functional-135520"
	I1006 14:21:50.074825  649678 addons.go:238] Setting addon storage-provisioner=true in "functional-135520"
	I1006 14:21:50.074761  649678 config.go:182] Loaded profile config "functional-135520": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 14:21:50.074866  649678 addons.go:69] Setting default-storageclass=true in profile "functional-135520"
	I1006 14:21:50.074859  649678 host.go:66] Checking if "functional-135520" exists ...
	I1006 14:21:50.074881  649678 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-135520"
	I1006 14:21:50.075174  649678 cli_runner.go:164] Run: docker container inspect functional-135520 --format={{.State.Status}}
	I1006 14:21:50.075488  649678 cli_runner.go:164] Run: docker container inspect functional-135520 --format={{.State.Status}}
	I1006 14:21:50.077233  649678 out.go:179] * Verifying Kubernetes components...
	I1006 14:21:50.078370  649678 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 14:21:50.095495  649678 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21701-626179/kubeconfig
	I1006 14:21:50.095656  649678 kapi.go:59] client config for functional-135520: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/client.crt", KeyFile:"/home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/client.key", CAFile:"/home/jenkins/minikube-integration/21701-626179/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1006 14:21:50.095938  649678 addons.go:238] Setting addon default-storageclass=true in "functional-135520"
	I1006 14:21:50.095974  649678 host.go:66] Checking if "functional-135520" exists ...
	I1006 14:21:50.096327  649678 cli_runner.go:164] Run: docker container inspect functional-135520 --format={{.State.Status}}
	I1006 14:21:50.100068  649678 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1006 14:21:50.101767  649678 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 14:21:50.101786  649678 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1006 14:21:50.101831  649678 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-135520
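	"scp memory --> /etc/kubernetes/addons/..." above streams an in-memory asset to the node over SSH. A hypothetical sketch of that step using golang.org/x/crypto/ssh, assuming an established *ssh.Client and writing via sudo tee (the real ssh_runner uses scp semantics):

	package sketch

	import (
	    "bytes"
	    "fmt"

	    "golang.org/x/crypto/ssh"
	)

	// writeRemoteFile pipes data into sudo tee on the node, approximating the
	// logged transfer of storage-provisioner.yaml (2676 bytes).
	func writeRemoteFile(client *ssh.Client, data []byte, dst string) error {
	    sess, err := client.NewSession()
	    if err != nil {
	        return err
	    }
	    defer sess.Close()
	    sess.Stdin = bytes.NewReader(data)
	    return sess.Run(fmt.Sprintf("sudo tee %s >/dev/null", dst))
	}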
	I1006 14:21:50.122986  649678 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1006 14:21:50.123017  649678 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1006 14:21:50.123083  649678 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-135520
	I1006 14:21:50.128190  649678 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32878 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/functional-135520/id_rsa Username:docker}
	I1006 14:21:50.141305  649678 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32878 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/functional-135520/id_rsa Username:docker}
	I1006 14:21:50.171892  649678 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1006 14:21:50.185683  649678 node_ready.go:35] waiting up to 6m0s for node "functional-135520" to be "Ready" ...
	I1006 14:21:50.185842  649678 type.go:168] "Request Body" body=""
	I1006 14:21:50.185906  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:21:50.186211  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
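	The GET loop that starts here polls the node's Ready condition every ~500ms for up to 6m, treating "connection refused" as transient while the apiserver comes back. A sketch of that wait with client-go, assuming a configured clientset:

	package sketch

	import (
	    "context"
	    "time"

	    corev1 "k8s.io/api/core/v1"
	    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	    "k8s.io/apimachinery/pkg/util/wait"
	    "k8s.io/client-go/kubernetes"
	)

	// waitNodeReady mirrors node_ready.go's loop: GET the node, swallow
	// transient errors, and stop once the Ready condition is True.
	func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
	    return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
	        func(ctx context.Context) (bool, error) {
	            node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	            if err != nil {
	                return false, nil // e.g. connection refused: retry
	            }
	            for _, c := range node.Status.Conditions {
	                if c.Type == corev1.NodeReady {
	                    return c.Status == corev1.ConditionTrue, nil
	                }
	            }
	            return false, nil
	        })
	}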
	I1006 14:21:50.238569  649678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 14:21:50.250369  649678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1006 14:21:50.297302  649678 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:21:50.297371  649678 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:50.297421  649678 retry.go:31] will retry after 341.445316ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
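	The "will retry after ..." lines from retry.go that follow show jittered, roughly exponential delays (about 300ms at first, growing past 11s). A minimal sketch of that pattern; the 1.5x multiplier and jitter range are assumptions, not minikube's exact constants:

	package sketch

	import (
	    "math"
	    "math/rand"
	    "time"
	)

	// retryApply reruns a flaky command with exponential backoff plus jitter,
	// approximating the retry cadence visible in this log.
	func retryApply(run func() error, attempts int) error {
	    base := 300 * time.Millisecond
	    var err error
	    for i := 0; i < attempts; i++ {
	        if err = run(); err == nil {
	            return nil
	        }
	        d := time.Duration(float64(base) * math.Pow(1.5, float64(i)))
	        d += time.Duration(rand.Int63n(int64(d)/2 + 1)) // jitter, never Int63n(0)
	        time.Sleep(d)
	    }
	    return err
	}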
	I1006 14:21:50.306094  649678 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:21:50.306137  649678 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:50.306156  649678 retry.go:31] will retry after 289.440052ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:50.596773  649678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1006 14:21:50.639555  649678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 14:21:50.652478  649678 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:21:50.652547  649678 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:50.652572  649678 retry.go:31] will retry after 276.474886ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
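	After the first plain applies fail, the log switches to `kubectl apply --force`. A local equivalent of that remote invocation (on the node it runs through ssh_runner; sudo accepts the inline KUBECONFIG assignment):

	package main

	import (
	    "fmt"
	    "os/exec"
	)

	func main() {
	    cmd := exec.Command("sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig",
	        "/var/lib/minikube/binaries/v1.34.1/kubectl",
	        "apply", "--force", "-f", "/etc/kubernetes/addons/storageclass.yaml")
	    out, err := cmd.CombinedOutput()
	    fmt.Printf("%s", out)
	    if err != nil {
	        // While the apiserver is down, validation fails exactly as logged
	        // above; the error text suggests --validate=false as the escape hatch.
	        fmt.Println("apply failed, will retry:", err)
	    }
	}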
	I1006 14:21:50.686728  649678 type.go:168] "Request Body" body=""
	I1006 14:21:50.686820  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:21:50.687192  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:21:50.696244  649678 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:21:50.696297  649678 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:50.696320  649678 retry.go:31] will retry after 208.115159ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:50.904724  649678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 14:21:50.929427  649678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1006 14:21:50.961651  649678 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:21:50.961718  649678 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:50.961741  649678 retry.go:31] will retry after 526.763649ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:50.984274  649678 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:21:50.988765  649678 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:50.988799  649678 retry.go:31] will retry after 299.40846ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:51.186119  649678 type.go:168] "Request Body" body=""
	I1006 14:21:51.186232  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:21:51.186600  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:21:51.288897  649678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1006 14:21:51.344296  649678 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:21:51.344362  649678 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:51.344390  649678 retry.go:31] will retry after 1.255489073s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:51.489635  649678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 14:21:51.542509  649678 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:21:51.545518  649678 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:51.545558  649678 retry.go:31] will retry after 1.109395122s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:51.686960  649678 type.go:168] "Request Body" body=""
	I1006 14:21:51.687044  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:21:51.687429  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:21:52.186098  649678 type.go:168] "Request Body" body=""
	I1006 14:21:52.186177  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:21:52.186579  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:21:52.186647  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:21:52.600133  649678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1006 14:21:52.654438  649678 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:21:52.654496  649678 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:52.654515  649678 retry.go:31] will retry after 1.609702337s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:52.655551  649678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 14:21:52.686897  649678 type.go:168] "Request Body" body=""
	I1006 14:21:52.686998  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:21:52.687382  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:21:52.709517  649678 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:21:52.709578  649678 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:52.709602  649678 retry.go:31] will retry after 1.712984533s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:53.186162  649678 type.go:168] "Request Body" body=""
	I1006 14:21:53.186283  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:21:53.186685  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:21:53.686305  649678 type.go:168] "Request Body" body=""
	I1006 14:21:53.686410  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:21:53.686778  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:21:54.186389  649678 type.go:168] "Request Body" body=""
	I1006 14:21:54.186497  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:21:54.186895  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:21:54.186974  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:21:54.265161  649678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1006 14:21:54.320415  649678 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:21:54.320465  649678 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:54.320484  649678 retry.go:31] will retry after 1.901708606s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:54.423753  649678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 14:21:54.478522  649678 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:21:54.478584  649678 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:54.478619  649678 retry.go:31] will retry after 1.584586857s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:54.685879  649678 type.go:168] "Request Body" body=""
	I1006 14:21:54.685954  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:21:54.686309  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:21:55.185880  649678 type.go:168] "Request Body" body=""
	I1006 14:21:55.185961  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:21:55.186309  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:21:55.685969  649678 type.go:168] "Request Body" body=""
	I1006 14:21:55.686071  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:21:55.686478  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:21:56.063981  649678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 14:21:56.118717  649678 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:21:56.118774  649678 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:56.118807  649678 retry.go:31] will retry after 2.733091815s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:56.185931  649678 type.go:168] "Request Body" body=""
	I1006 14:21:56.186008  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:21:56.186344  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:21:56.222525  649678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1006 14:21:56.276120  649678 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:21:56.276196  649678 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:56.276235  649678 retry.go:31] will retry after 1.816128137s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:56.686920  649678 type.go:168] "Request Body" body=""
	I1006 14:21:56.687009  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:21:56.687408  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:21:56.687471  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:21:57.186225  649678 type.go:168] "Request Body" body=""
	I1006 14:21:57.186314  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:21:57.186655  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:21:57.686516  649678 type.go:168] "Request Body" body=""
	I1006 14:21:57.686601  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:21:57.686915  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:21:58.093526  649678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1006 14:21:58.148989  649678 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:21:58.149041  649678 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:58.149066  649678 retry.go:31] will retry after 2.492749577s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:58.186253  649678 type.go:168] "Request Body" body=""
	I1006 14:21:58.186345  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:21:58.186702  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:21:58.686540  649678 type.go:168] "Request Body" body=""
	I1006 14:21:58.686625  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:21:58.686963  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:21:58.852333  649678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 14:21:58.907770  649678 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:21:58.907811  649678 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:58.907831  649678 retry.go:31] will retry after 3.408188619s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:59.186242  649678 type.go:168] "Request Body" body=""
	I1006 14:21:59.186325  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:21:59.186705  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:21:59.186784  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:21:59.686631  649678 type.go:168] "Request Body" body=""
	I1006 14:21:59.686729  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:21:59.687112  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:00.185903  649678 type.go:168] "Request Body" body=""
	I1006 14:22:00.185998  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:00.186365  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:00.642984  649678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1006 14:22:00.686799  649678 type.go:168] "Request Body" body=""
	I1006 14:22:00.686880  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:00.687243  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:00.698375  649678 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:22:00.698427  649678 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:22:00.698448  649678 retry.go:31] will retry after 6.594317937s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:22:01.186036  649678 type.go:168] "Request Body" body=""
	I1006 14:22:01.186143  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:01.186563  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:01.686476  649678 type.go:168] "Request Body" body=""
	I1006 14:22:01.686584  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:01.686981  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:22:01.687058  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:22:02.186608  649678 type.go:168] "Request Body" body=""
	I1006 14:22:02.186705  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:02.187061  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:02.316279  649678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 14:22:02.370200  649678 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:22:02.373358  649678 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:22:02.373390  649678 retry.go:31] will retry after 5.569612861s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:22:02.686858  649678 type.go:168] "Request Body" body=""
	I1006 14:22:02.686947  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:02.687350  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:03.185954  649678 type.go:168] "Request Body" body=""
	I1006 14:22:03.186035  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:03.186451  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:03.686069  649678 type.go:168] "Request Body" body=""
	I1006 14:22:03.686185  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:03.686679  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:04.186146  649678 type.go:168] "Request Body" body=""
	I1006 14:22:04.186265  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:04.186682  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:22:04.186759  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:22:04.686312  649678 type.go:168] "Request Body" body=""
	I1006 14:22:04.686448  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:04.686778  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:05.186355  649678 type.go:168] "Request Body" body=""
	I1006 14:22:05.186442  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:05.186804  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:05.686470  649678 type.go:168] "Request Body" body=""
	I1006 14:22:05.686548  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:05.686892  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:06.186409  649678 type.go:168] "Request Body" body=""
	I1006 14:22:06.186493  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:06.186841  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:22:06.186906  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:22:06.686653  649678 type.go:168] "Request Body" body=""
	I1006 14:22:06.686731  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:06.687077  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:07.186430  649678 type.go:168] "Request Body" body=""
	I1006 14:22:07.186515  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:07.186850  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:07.293062  649678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1006 14:22:07.347879  649678 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:22:07.347938  649678 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:22:07.347958  649678 retry.go:31] will retry after 11.599769479s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
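
The "apply failed, will retry after 11.599769479s" sequence above shows the addon applier re-running kubectl while the apiserver is down. A minimal, self-contained sketch of that retry-with-randomized-backoff pattern, under stated assumptions: applyManifest, applyWithRetry, and the delay schedule are illustrative, not minikube's actual retry.go implementation.

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// applyManifest stands in for `kubectl apply --force -f <manifest>`;
// here it simply fails until the apiserver is reachable again.
func applyManifest(manifest string, apiserverUp func() bool) error {
	if !apiserverUp() {
		return fmt.Errorf("apply %s: dial tcp [::1]:8441: connect: connection refused", manifest)
	}
	return nil
}

// applyWithRetry sleeps for a randomized, roughly doubling delay between
// attempts, which is why the intervals logged in this section are irregular
// rather than a fixed step.
func applyWithRetry(manifest string, apiserverUp func() bool, maxAttempts int) error {
	base := 5 * time.Second
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		if err := applyManifest(manifest, apiserverUp); err == nil {
			return nil
		} else {
			delay := base/2 + time.Duration(rand.Int63n(int64(base)))
			fmt.Printf("will retry after %v: %v\n", delay, err)
			time.Sleep(delay)
			base *= 2 // grows toward the tens-of-seconds delays seen later in the log
		}
	}
	return fmt.Errorf("%s: still failing after %d attempts", manifest, maxAttempts)
}

func main() {
	start := time.Now()
	apiserverUp := func() bool { return time.Since(start) > 30*time.Second }
	if err := applyWithRetry("/etc/kubernetes/addons/storageclass.yaml", apiserverUp, 8); err != nil {
		fmt.Println(err)
	}
}
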
	I1006 14:22:07.686422  649678 type.go:168] "Request Body" body=""
	I1006 14:22:07.686519  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:07.686919  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:07.943325  649678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 14:22:07.994639  649678 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:22:07.997627  649678 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:22:07.997659  649678 retry.go:31] will retry after 6.982471195s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:22:08.186017  649678 type.go:168] "Request Body" body=""
	I1006 14:22:08.186095  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:08.186523  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:08.686113  649678 type.go:168] "Request Body" body=""
	I1006 14:22:08.686234  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:08.686617  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:22:08.686693  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:22:09.186236  649678 type.go:168] "Request Body" body=""
	I1006 14:22:09.186345  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:09.186717  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:09.686283  649678 type.go:168] "Request Body" body=""
	I1006 14:22:09.686365  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:09.686759  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:10.186558  649678 type.go:168] "Request Body" body=""
	I1006 14:22:10.186657  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:10.187046  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:10.686665  649678 type.go:168] "Request Body" body=""
	I1006 14:22:10.686743  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:10.687116  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:22:10.687244  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:22:11.186799  649678 type.go:168] "Request Body" body=""
	I1006 14:22:11.186892  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:11.187296  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:11.686074  649678 type.go:168] "Request Body" body=""
	I1006 14:22:11.686224  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:11.686586  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:12.186151  649678 type.go:168] "Request Body" body=""
	I1006 14:22:12.186305  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:12.186696  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:12.686260  649678 type.go:168] "Request Body" body=""
	I1006 14:22:12.686345  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:12.686706  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:13.186307  649678 type.go:168] "Request Body" body=""
	I1006 14:22:13.186418  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:13.186788  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:22:13.186857  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:22:13.686381  649678 type.go:168] "Request Body" body=""
	I1006 14:22:13.686488  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:13.686854  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:14.186497  649678 type.go:168] "Request Body" body=""
	I1006 14:22:14.186592  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:14.186941  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:14.686598  649678 type.go:168] "Request Body" body=""
	I1006 14:22:14.686682  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:14.687029  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:14.980397  649678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 14:22:15.034191  649678 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:22:15.034263  649678 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:22:15.034288  649678 retry.go:31] will retry after 12.004605903s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:22:15.186550  649678 type.go:168] "Request Body" body=""
	I1006 14:22:15.186633  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:15.187020  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:22:15.187102  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:22:15.686717  649678 type.go:168] "Request Body" body=""
	I1006 14:22:15.686812  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:15.687196  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:16.186809  649678 type.go:168] "Request Body" body=""
	I1006 14:22:16.186884  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:16.187256  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:16.686013  649678 type.go:168] "Request Body" body=""
	I1006 14:22:16.686098  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:16.686488  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:17.186068  649678 type.go:168] "Request Body" body=""
	I1006 14:22:17.186146  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:17.186573  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:17.686133  649678 type.go:168] "Request Body" body=""
	I1006 14:22:17.686253  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:17.686622  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:22:17.686699  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:22:18.186192  649678 type.go:168] "Request Body" body=""
	I1006 14:22:18.186295  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:18.186693  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:18.686281  649678 type.go:168] "Request Body" body=""
	I1006 14:22:18.686358  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:18.686685  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:18.948057  649678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1006 14:22:19.002723  649678 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:22:19.002770  649678 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:22:19.002791  649678 retry.go:31] will retry after 9.663618433s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
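
Every one of these apply failures has the same root cause spelled out in the error text: before submitting a manifest, kubectl validates it against the cluster's OpenAPI schema, which it must first download from the apiserver, so a refused connection aborts the apply at the validation step (hence the hint about --validate=false). A rough sketch of that schema probe, using the endpoint quoted in the errors; the insecure TLS setting is a placeholder to keep the sketch self-contained, not what a real client should do.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
)

func main() {
	// Skipping certificate verification only for illustration; a real
	// client would trust the cluster CA from the kubeconfig instead.
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	resp, err := client.Get("https://localhost:8441/openapi/v2?timeout=32s")
	if err != nil {
		// While the apiserver is down this reports the same "connection
		// refused" that aborts every kubectl apply in the log above.
		fmt.Println("openapi fetch failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("openapi schema reachable:", resp.Status)
}
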
	I1006 14:22:19.186105  649678 type.go:168] "Request Body" body=""
	I1006 14:22:19.186250  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:19.186659  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:19.686518  649678 type.go:168] "Request Body" body=""
	I1006 14:22:19.686605  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:19.686939  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:22:19.687009  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:22:20.186860  649678 type.go:168] "Request Body" body=""
	I1006 14:22:20.186965  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:20.187367  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:20.686167  649678 type.go:168] "Request Body" body=""
	I1006 14:22:20.686275  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:20.686635  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:21.186460  649678 type.go:168] "Request Body" body=""
	I1006 14:22:21.186548  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:21.186942  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:21.686821  649678 type.go:168] "Request Body" body=""
	I1006 14:22:21.686902  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:21.687332  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:22:21.687397  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:22:22.186083  649678 type.go:168] "Request Body" body=""
	I1006 14:22:22.186166  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:22.186569  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:22.686397  649678 type.go:168] "Request Body" body=""
	I1006 14:22:22.686491  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:22.686903  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:23.186781  649678 type.go:168] "Request Body" body=""
	I1006 14:22:23.186870  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:23.187268  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:23.686042  649678 type.go:168] "Request Body" body=""
	I1006 14:22:23.686129  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:23.686575  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:24.186356  649678 type.go:168] "Request Body" body=""
	I1006 14:22:24.186489  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:24.186921  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:22:24.187013  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:22:24.686802  649678 type.go:168] "Request Body" body=""
	I1006 14:22:24.686904  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:24.687313  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:25.186100  649678 type.go:168] "Request Body" body=""
	I1006 14:22:25.186254  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:25.186644  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:25.686394  649678 type.go:168] "Request Body" body=""
	I1006 14:22:25.686478  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:25.686854  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:26.186709  649678 type.go:168] "Request Body" body=""
	I1006 14:22:26.186843  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:26.187291  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:22:26.187357  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:22:26.686108  649678 type.go:168] "Request Body" body=""
	I1006 14:22:26.686232  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:26.686608  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:27.039059  649678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 14:22:27.094007  649678 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:22:27.097496  649678 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:22:27.097534  649678 retry.go:31] will retry after 22.614868096s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:22:27.186847  649678 type.go:168] "Request Body" body=""
	I1006 14:22:27.186925  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:27.187319  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:27.686152  649678 type.go:168] "Request Body" body=""
	I1006 14:22:27.686302  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:27.686651  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:28.186562  649678 type.go:168] "Request Body" body=""
	I1006 14:22:28.186655  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:28.187109  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:28.666677  649678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1006 14:22:28.686315  649678 type.go:168] "Request Body" body=""
	I1006 14:22:28.686424  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:28.686765  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:22:28.686846  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:22:28.722750  649678 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:22:28.722794  649678 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:22:28.722814  649678 retry.go:31] will retry after 11.553901016s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:22:29.186360  649678 type.go:168] "Request Body" body=""
	I1006 14:22:29.186463  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:29.186854  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:29.686594  649678 type.go:168] "Request Body" body=""
	I1006 14:22:29.686674  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:29.687059  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:30.186847  649678 type.go:168] "Request Body" body=""
	I1006 14:22:30.186978  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:30.187394  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:30.685980  649678 type.go:168] "Request Body" body=""
	I1006 14:22:30.686063  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:30.686514  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:31.186103  649678 type.go:168] "Request Body" body=""
	I1006 14:22:31.186273  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:31.186671  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:22:31.186735  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:22:31.686585  649678 type.go:168] "Request Body" body=""
	I1006 14:22:31.686699  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:31.687091  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:32.186757  649678 type.go:168] "Request Body" body=""
	I1006 14:22:32.186864  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:32.187311  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:32.685887  649678 type.go:168] "Request Body" body=""
	I1006 14:22:32.685973  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:32.686388  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:33.186057  649678 type.go:168] "Request Body" body=""
	I1006 14:22:33.186156  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:33.186557  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:33.686144  649678 type.go:168] "Request Body" body=""
	I1006 14:22:33.686262  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:33.686648  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:22:33.686721  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:22:34.186259  649678 type.go:168] "Request Body" body=""
	I1006 14:22:34.186354  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:34.186737  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:34.686419  649678 type.go:168] "Request Body" body=""
	I1006 14:22:34.686498  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:34.686871  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:35.186497  649678 type.go:168] "Request Body" body=""
	I1006 14:22:35.186603  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:35.186980  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:35.686662  649678 type.go:168] "Request Body" body=""
	I1006 14:22:35.686763  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:35.687122  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:22:35.687197  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:22:36.186754  649678 type.go:168] "Request Body" body=""
	I1006 14:22:36.186848  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:36.187316  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:36.686164  649678 type.go:168] "Request Body" body=""
	I1006 14:22:36.686314  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:36.686722  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:37.186321  649678 type.go:168] "Request Body" body=""
	I1006 14:22:37.186420  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:37.186775  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:37.686633  649678 type.go:168] "Request Body" body=""
	I1006 14:22:37.686715  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:37.687101  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:38.185900  649678 type.go:168] "Request Body" body=""
	I1006 14:22:38.185994  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:38.186391  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:22:38.186465  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:22:38.686198  649678 type.go:168] "Request Body" body=""
	I1006 14:22:38.686309  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:38.686708  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:39.186526  649678 type.go:168] "Request Body" body=""
	I1006 14:22:39.186655  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:39.187049  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:39.685917  649678 type.go:168] "Request Body" body=""
	I1006 14:22:39.686005  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:39.686446  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:40.186230  649678 type.go:168] "Request Body" body=""
	I1006 14:22:40.186337  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:40.186733  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:22:40.186801  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:22:40.276916  649678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1006 14:22:40.331801  649678 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:22:40.335179  649678 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:22:40.335232  649678 retry.go:31] will retry after 39.41387573s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:22:40.686763  649678 type.go:168] "Request Body" body=""
	I1006 14:22:40.686899  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:40.687303  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:41.186091  649678 type.go:168] "Request Body" body=""
	I1006 14:22:41.186200  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:41.186603  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:41.686526  649678 type.go:168] "Request Body" body=""
	I1006 14:22:41.686626  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:41.687010  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:42.186887  649678 type.go:168] "Request Body" body=""
	I1006 14:22:42.186964  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:42.187345  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:22:42.187421  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:22:42.686150  649678 type.go:168] "Request Body" body=""
	I1006 14:22:42.686267  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:42.686658  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:43.186527  649678 type.go:168] "Request Body" body=""
	I1006 14:22:43.186614  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:43.186999  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:43.686820  649678 type.go:168] "Request Body" body=""
	I1006 14:22:43.686909  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:43.687318  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:44.186096  649678 type.go:168] "Request Body" body=""
	I1006 14:22:44.186247  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:44.186640  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:44.686530  649678 type.go:168] "Request Body" body=""
	I1006 14:22:44.686615  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:44.687010  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:22:44.687087  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:22:45.186889  649678 type.go:168] "Request Body" body=""
	I1006 14:22:45.186975  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:45.187340  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:45.686094  649678 type.go:168] "Request Body" body=""
	I1006 14:22:45.686177  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:45.686579  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:46.186357  649678 type.go:168] "Request Body" body=""
	I1006 14:22:46.186468  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:46.186826  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:46.686734  649678 type.go:168] "Request Body" body=""
	I1006 14:22:46.686824  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:46.687252  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:22:46.687331  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:22:47.186069  649678 type.go:168] "Request Body" body=""
	I1006 14:22:47.186155  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:47.186586  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:47.686023  649678 type.go:168] "Request Body" body=""
	I1006 14:22:47.686126  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:47.686582  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:48.186406  649678 type.go:168] "Request Body" body=""
	I1006 14:22:48.186501  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:48.186908  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:48.686766  649678 type.go:168] "Request Body" body=""
	I1006 14:22:48.686850  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:48.687229  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:49.186033  649678 type.go:168] "Request Body" body=""
	I1006 14:22:49.186123  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:49.186550  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:22:49.186623  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:22:49.686385  649678 type.go:168] "Request Body" body=""
	I1006 14:22:49.686504  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:49.686900  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:49.713160  649678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 14:22:49.766183  649678 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:22:49.769572  649678 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:22:49.769611  649678 retry.go:31] will retry after 48.442133458s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[log condensed: the GET https://192.168.49.2:8441/api/v1/nodes/functional-135520 poll shown above repeats every ~500ms from 14:22:50.186 through 14:23:19.686, each attempt returning an empty response; node_ready.go:55 logs the same "connection refused" (will retry) warning at 14:22:51, 14:22:53, 14:22:56, 14:22:58, 14:23:00, 14:23:02, 14:23:05, 14:23:07, 14:23:09, 14:23:12, 14:23:14, 14:23:16 and 14:23:19]
	I1006 14:23:19.749802  649678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1006 14:23:19.804037  649678 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:23:19.807440  649678 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:23:19.807591  649678 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
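Both failure modes above bottom out in a raw TCP "connection refused": the node poll against 192.168.49.2:8441, and the OpenAPI download against localhost:8441 that kubectl performs for client-side validation (hence the suggested --validate=false escape hatch). A plain dial, sketched below with the endpoint taken from the log, separates "nothing listening on the port" from TLS or auth problems:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    // apiserverReachable reports whether a TCP connection to the apiserver
    // endpoint can be opened at all; "connection refused" here means nothing
    // is listening on the port, not a certificate or auth failure.
    func apiserverReachable(addr string, timeout time.Duration) error {
        conn, err := net.DialTimeout("tcp", addr, timeout)
        if err != nil {
            return err
        }
        return conn.Close()
    }

    func main() {
        // Endpoint taken from this log; adjust for other clusters.
        if err := apiserverReachable("192.168.49.2:8441", 2*time.Second); err != nil {
            fmt.Println("apiserver not accepting connections:", err)
            return
        }
        fmt.Println("apiserver port is open")
    }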
	[log condensed: the same ~500ms GET poll of /api/v1/nodes/functional-135520 continues from 14:23:20.186 through 14:23:38.186, still returning empty responses; node_ready.go:55 repeats the "connection refused" (will retry) warning at 14:23:21, 14:23:23, 14:23:26, 14:23:28, 14:23:31, 14:23:33, 14:23:35 and 14:23:38]
	I1006 14:23:38.212898  649678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 14:23:38.268129  649678 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:23:38.271217  649678 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:23:38.271448  649678 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1006 14:23:38.274179  649678 out.go:179] * Enabled addons: 
	I1006 14:23:38.275265  649678 addons.go:514] duration metric: took 1m48.200610857s for enable addons: enabled=[]
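The GET /api/v1/nodes/functional-135520 traffic throughout this log is a node-readiness wait loop (node_ready.go) that deliberately swallows transient errors and keeps polling. Below is a minimal sketch of such a loop using client-go's wait helpers (needs a recent client-go, roughly v0.27+ for PollUntilContextTimeout); waitNodeReady and the 500ms/5m interval and timeout are assumptions, though 500ms matches the request timestamps above.

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitNodeReady polls GET /api/v1/nodes/<name> until the Ready condition
    // is True or the timeout expires. Transient errors (such as "connection
    // refused" while the apiserver restarts) are logged and swallowed so the
    // poll keeps going, mirroring the warnings in this log.
    func waitNodeReady(cs kubernetes.Interface, name string, timeout time.Duration) error {
        return wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, timeout, true,
            func(ctx context.Context) (bool, error) {
                node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
                if err != nil {
                    fmt.Printf("error getting node %q (will retry): %v\n", name, err)
                    return false, nil // swallow the error so polling continues
                }
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                        return true, nil
                    }
                }
                return false, nil
            })
    }

    func main() {
        // Kubeconfig path taken from the log; any valid kubeconfig works.
        config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }
        if err := waitNodeReady(cs, "functional-135520", 5*time.Minute); err != nil {
            fmt.Println(err)
        }
    }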
	I1006 14:23:38.686820  649678 type.go:168] "Request Body" body=""
	I1006 14:23:38.686904  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:23:38.687336  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:23:39.186242  649678 type.go:168] "Request Body" body=""
	I1006 14:23:39.186340  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:23:39.186728  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:23:39.686616  649678 type.go:168] "Request Body" body=""
	I1006 14:23:39.686713  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:23:39.687110  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:23:40.185923  649678 type.go:168] "Request Body" body=""
	I1006 14:23:40.186012  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:23:40.186440  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:23:40.686260  649678 type.go:168] "Request Body" body=""
	I1006 14:23:40.686360  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:23:40.686781  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:23:40.686870  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the same GET https://192.168.49.2:8441/api/v1/nodes/functional-135520 poll repeats every ~500ms from 14:23:41.186 through 14:24:40.686: each attempt logs an empty "Request Body" (type.go:168), the request with its Accept and User-Agent headers (round_trippers.go:527), and an empty response (round_trippers.go:632), while node_ready.go:55 warns every ~2s that the node "Ready" status could not be fetched: dial tcp 192.168.49.2:8441: connect: connection refused (will retry) ...]
	I1006 14:24:41.186000  649678 type.go:168] "Request Body" body=""
	I1006 14:24:41.186080  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:41.186497  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:41.686311  649678 type.go:168] "Request Body" body=""
	I1006 14:24:41.686398  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:41.686747  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:42.186394  649678 type.go:168] "Request Body" body=""
	I1006 14:24:42.186477  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:42.186829  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:24:42.186909  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:24:42.686365  649678 type.go:168] "Request Body" body=""
	I1006 14:24:42.686458  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:42.686828  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:43.186364  649678 type.go:168] "Request Body" body=""
	I1006 14:24:43.186453  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:43.186835  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:43.686404  649678 type.go:168] "Request Body" body=""
	I1006 14:24:43.686479  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:43.686829  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:44.186419  649678 type.go:168] "Request Body" body=""
	I1006 14:24:44.186497  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:44.186840  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:44.686503  649678 type.go:168] "Request Body" body=""
	I1006 14:24:44.686579  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:44.686908  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:24:44.686976  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:24:45.186546  649678 type.go:168] "Request Body" body=""
	I1006 14:24:45.186633  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:45.186973  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:45.686633  649678 type.go:168] "Request Body" body=""
	I1006 14:24:45.686722  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:45.687066  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:46.186715  649678 type.go:168] "Request Body" body=""
	I1006 14:24:46.186798  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:46.187164  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:46.686921  649678 type.go:168] "Request Body" body=""
	I1006 14:24:46.687008  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:46.687441  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:24:46.687511  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:24:47.186093  649678 type.go:168] "Request Body" body=""
	I1006 14:24:47.186175  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:47.186548  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:47.686128  649678 type.go:168] "Request Body" body=""
	I1006 14:24:47.686233  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:47.686613  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:48.186260  649678 type.go:168] "Request Body" body=""
	I1006 14:24:48.186345  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:48.186715  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:48.686317  649678 type.go:168] "Request Body" body=""
	I1006 14:24:48.686416  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:48.686787  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:49.186383  649678 type.go:168] "Request Body" body=""
	I1006 14:24:49.186483  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:49.186862  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:24:49.186934  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:24:49.686547  649678 type.go:168] "Request Body" body=""
	I1006 14:24:49.686630  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:49.687018  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:50.186932  649678 type.go:168] "Request Body" body=""
	I1006 14:24:50.187020  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:50.187392  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:50.685995  649678 type.go:168] "Request Body" body=""
	I1006 14:24:50.686087  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:50.686639  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:51.186241  649678 type.go:168] "Request Body" body=""
	I1006 14:24:51.186321  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:51.186677  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:51.686524  649678 type.go:168] "Request Body" body=""
	I1006 14:24:51.686604  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:51.686971  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:24:51.687045  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:24:52.186636  649678 type.go:168] "Request Body" body=""
	I1006 14:24:52.186724  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:52.187108  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:52.686753  649678 type.go:168] "Request Body" body=""
	I1006 14:24:52.686831  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:52.687267  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:53.185896  649678 type.go:168] "Request Body" body=""
	I1006 14:24:53.185979  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:53.186366  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:53.685914  649678 type.go:168] "Request Body" body=""
	I1006 14:24:53.685990  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:53.686334  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:54.185922  649678 type.go:168] "Request Body" body=""
	I1006 14:24:54.186002  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:54.186408  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:24:54.186489  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:24:54.685967  649678 type.go:168] "Request Body" body=""
	I1006 14:24:54.686051  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:54.686451  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:55.186040  649678 type.go:168] "Request Body" body=""
	I1006 14:24:55.186122  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:55.186477  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:55.686036  649678 type.go:168] "Request Body" body=""
	I1006 14:24:55.686113  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:55.686480  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:56.186026  649678 type.go:168] "Request Body" body=""
	I1006 14:24:56.186104  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:56.186478  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:24:56.186550  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:24:56.686248  649678 type.go:168] "Request Body" body=""
	I1006 14:24:56.686329  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:56.686693  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:57.186234  649678 type.go:168] "Request Body" body=""
	I1006 14:24:57.186315  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:57.186630  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:57.686283  649678 type.go:168] "Request Body" body=""
	I1006 14:24:57.686402  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:57.686814  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:58.186365  649678 type.go:168] "Request Body" body=""
	I1006 14:24:58.186450  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:58.186794  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:24:58.186858  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:24:58.686485  649678 type.go:168] "Request Body" body=""
	I1006 14:24:58.686625  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:58.687000  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:59.186645  649678 type.go:168] "Request Body" body=""
	I1006 14:24:59.186728  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:59.187067  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:59.686701  649678 type.go:168] "Request Body" body=""
	I1006 14:24:59.686778  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:59.687158  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:00.185971  649678 type.go:168] "Request Body" body=""
	I1006 14:25:00.186051  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:00.186405  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:00.686037  649678 type.go:168] "Request Body" body=""
	I1006 14:25:00.686117  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:00.686528  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:25:00.686606  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:25:01.186098  649678 type.go:168] "Request Body" body=""
	I1006 14:25:01.186186  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:01.186639  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:01.686574  649678 type.go:168] "Request Body" body=""
	I1006 14:25:01.686664  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:01.687059  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:02.186731  649678 type.go:168] "Request Body" body=""
	I1006 14:25:02.186819  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:02.187259  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:02.685880  649678 type.go:168] "Request Body" body=""
	I1006 14:25:02.685972  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:02.686460  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:03.186037  649678 type.go:168] "Request Body" body=""
	I1006 14:25:03.186117  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:03.186526  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:25:03.186595  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:25:03.686186  649678 type.go:168] "Request Body" body=""
	I1006 14:25:03.686282  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:03.686638  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:04.186251  649678 type.go:168] "Request Body" body=""
	I1006 14:25:04.186325  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:04.186672  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:04.686261  649678 type.go:168] "Request Body" body=""
	I1006 14:25:04.686346  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:04.686697  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:05.186293  649678 type.go:168] "Request Body" body=""
	I1006 14:25:05.186374  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:05.186780  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:25:05.186857  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:25:05.686332  649678 type.go:168] "Request Body" body=""
	I1006 14:25:05.686416  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:05.686772  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:06.186370  649678 type.go:168] "Request Body" body=""
	I1006 14:25:06.186449  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:06.186819  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:06.686670  649678 type.go:168] "Request Body" body=""
	I1006 14:25:06.686749  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:06.687114  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:07.186765  649678 type.go:168] "Request Body" body=""
	I1006 14:25:07.186854  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:07.187255  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:25:07.187328  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:25:07.686866  649678 type.go:168] "Request Body" body=""
	I1006 14:25:07.686945  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:07.687337  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:08.185991  649678 type.go:168] "Request Body" body=""
	I1006 14:25:08.186073  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:08.186473  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:08.686026  649678 type.go:168] "Request Body" body=""
	I1006 14:25:08.686101  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:08.686467  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:09.186027  649678 type.go:168] "Request Body" body=""
	I1006 14:25:09.186117  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:09.186491  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:09.686131  649678 type.go:168] "Request Body" body=""
	I1006 14:25:09.686218  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:09.686554  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:25:09.686624  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:25:10.186421  649678 type.go:168] "Request Body" body=""
	I1006 14:25:10.186509  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:10.186885  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:10.686589  649678 type.go:168] "Request Body" body=""
	I1006 14:25:10.686673  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:10.687059  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:11.186451  649678 type.go:168] "Request Body" body=""
	I1006 14:25:11.186534  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:11.186908  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:11.686874  649678 type.go:168] "Request Body" body=""
	I1006 14:25:11.686958  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:11.687404  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:25:11.687478  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:25:12.186004  649678 type.go:168] "Request Body" body=""
	I1006 14:25:12.186089  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:12.186488  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:12.686071  649678 type.go:168] "Request Body" body=""
	I1006 14:25:12.686175  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:12.686583  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:13.186311  649678 type.go:168] "Request Body" body=""
	I1006 14:25:13.186394  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:13.186794  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:13.686469  649678 type.go:168] "Request Body" body=""
	I1006 14:25:13.686560  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:13.686955  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:14.186674  649678 type.go:168] "Request Body" body=""
	I1006 14:25:14.186764  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:14.187198  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:25:14.187305  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:25:14.686830  649678 type.go:168] "Request Body" body=""
	I1006 14:25:14.686915  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:14.687318  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:15.185883  649678 type.go:168] "Request Body" body=""
	I1006 14:25:15.185963  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:15.186381  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:15.685988  649678 type.go:168] "Request Body" body=""
	I1006 14:25:15.686075  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:15.686471  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:16.186057  649678 type.go:168] "Request Body" body=""
	I1006 14:25:16.186159  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:16.186628  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:16.686506  649678 type.go:168] "Request Body" body=""
	I1006 14:25:16.686586  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:16.686922  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:25:16.686991  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:25:17.186686  649678 type.go:168] "Request Body" body=""
	I1006 14:25:17.186779  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:17.187190  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:17.686871  649678 type.go:168] "Request Body" body=""
	I1006 14:25:17.686958  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:17.687378  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:18.185930  649678 type.go:168] "Request Body" body=""
	I1006 14:25:18.186011  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:18.186362  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:18.686006  649678 type.go:168] "Request Body" body=""
	I1006 14:25:18.686091  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:18.686522  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:19.186154  649678 type.go:168] "Request Body" body=""
	I1006 14:25:19.186270  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:19.186661  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:25:19.186738  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:25:19.686272  649678 type.go:168] "Request Body" body=""
	I1006 14:25:19.686357  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:19.686722  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:20.186620  649678 type.go:168] "Request Body" body=""
	I1006 14:25:20.186712  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:20.187085  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:20.686732  649678 type.go:168] "Request Body" body=""
	I1006 14:25:20.686813  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:20.687200  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:21.186886  649678 type.go:168] "Request Body" body=""
	I1006 14:25:21.186971  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:21.187421  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:25:21.187498  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:25:21.686192  649678 type.go:168] "Request Body" body=""
	I1006 14:25:21.686313  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:21.686703  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:22.186337  649678 type.go:168] "Request Body" body=""
	I1006 14:25:22.186443  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:22.186816  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:22.686392  649678 type.go:168] "Request Body" body=""
	I1006 14:25:22.686470  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:22.686872  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:23.186538  649678 type.go:168] "Request Body" body=""
	I1006 14:25:23.186623  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:23.186990  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:23.686645  649678 type.go:168] "Request Body" body=""
	I1006 14:25:23.686745  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:23.687147  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:25:23.687255  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:25:24.186838  649678 type.go:168] "Request Body" body=""
	I1006 14:25:24.186917  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:24.187309  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:24.685862  649678 type.go:168] "Request Body" body=""
	I1006 14:25:24.685944  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:24.686370  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:25.185903  649678 type.go:168] "Request Body" body=""
	I1006 14:25:25.185979  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:25.186373  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:25.685951  649678 type.go:168] "Request Body" body=""
	I1006 14:25:25.686032  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:25.686450  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:26.186018  649678 type.go:168] "Request Body" body=""
	I1006 14:25:26.186098  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:26.186497  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:25:26.186566  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:25:26.686293  649678 type.go:168] "Request Body" body=""
	I1006 14:25:26.686378  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:26.686746  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:27.186364  649678 type.go:168] "Request Body" body=""
	I1006 14:25:27.186454  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:27.186827  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:27.686418  649678 type.go:168] "Request Body" body=""
	I1006 14:25:27.686503  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:27.686844  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:28.186581  649678 type.go:168] "Request Body" body=""
	I1006 14:25:28.186676  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:28.187085  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:25:28.187196  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:25:28.686665  649678 type.go:168] "Request Body" body=""
	I1006 14:25:28.686737  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:28.687051  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:29.186712  649678 type.go:168] "Request Body" body=""
	I1006 14:25:29.186801  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:29.187161  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:29.685861  649678 type.go:168] "Request Body" body=""
	I1006 14:25:29.685951  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:29.686323  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:30.186241  649678 type.go:168] "Request Body" body=""
	I1006 14:25:30.186336  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:30.186725  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:30.686347  649678 type.go:168] "Request Body" body=""
	I1006 14:25:30.686438  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:30.686799  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:25:30.686867  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the same GET poll against https://192.168.49.2:8441/api/v1/nodes/functional-135520 repeats on a ~500ms cadence for the next minute: each iteration logs an empty "Request Body", the request with identical Accept/User-Agent headers, and a "Response" with empty status in 0ms, while node_ready.go:55 emits the same "dial tcp 192.168.49.2:8441: connect: connection refused" (will retry) warning roughly every two seconds from 14:25:32 through 14:26:29 ...]
	W1006 14:26:32.187243  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:26:32.686763  649678 type.go:168] "Request Body" body=""
	I1006 14:26:32.686849  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:32.687250  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:33.185866  649678 type.go:168] "Request Body" body=""
	I1006 14:26:33.185966  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:33.186401  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:33.685998  649678 type.go:168] "Request Body" body=""
	I1006 14:26:33.686076  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:33.686491  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:34.186036  649678 type.go:168] "Request Body" body=""
	I1006 14:26:34.186137  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:34.186537  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:34.686069  649678 type.go:168] "Request Body" body=""
	I1006 14:26:34.686144  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:34.686500  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:26:34.686564  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:26:35.186170  649678 type.go:168] "Request Body" body=""
	I1006 14:26:35.186296  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:35.186675  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:35.686291  649678 type.go:168] "Request Body" body=""
	I1006 14:26:35.686375  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:35.686758  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:36.186396  649678 type.go:168] "Request Body" body=""
	I1006 14:26:36.186499  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:36.186883  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:36.686651  649678 type.go:168] "Request Body" body=""
	I1006 14:26:36.686732  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:36.687079  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:26:36.687145  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:26:37.186756  649678 type.go:168] "Request Body" body=""
	I1006 14:26:37.186868  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:37.187300  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:37.685900  649678 type.go:168] "Request Body" body=""
	I1006 14:26:37.686015  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:37.686475  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:38.186110  649678 type.go:168] "Request Body" body=""
	I1006 14:26:38.186226  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:38.186598  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:38.686176  649678 type.go:168] "Request Body" body=""
	I1006 14:26:38.686303  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:38.686658  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:39.186240  649678 type.go:168] "Request Body" body=""
	I1006 14:26:39.186320  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:39.186682  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:26:39.186749  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:26:39.686298  649678 type.go:168] "Request Body" body=""
	I1006 14:26:39.686387  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:39.686746  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:40.186587  649678 type.go:168] "Request Body" body=""
	I1006 14:26:40.186667  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:40.187038  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:40.686696  649678 type.go:168] "Request Body" body=""
	I1006 14:26:40.686801  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:40.687169  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:41.186829  649678 type.go:168] "Request Body" body=""
	I1006 14:26:41.186908  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:41.187312  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:26:41.187383  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:26:41.686029  649678 type.go:168] "Request Body" body=""
	I1006 14:26:41.686108  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:41.686522  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:42.186071  649678 type.go:168] "Request Body" body=""
	I1006 14:26:42.186168  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:42.186549  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:42.686104  649678 type.go:168] "Request Body" body=""
	I1006 14:26:42.686190  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:42.686575  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:43.186140  649678 type.go:168] "Request Body" body=""
	I1006 14:26:43.186255  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:43.186605  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:43.686244  649678 type.go:168] "Request Body" body=""
	I1006 14:26:43.686321  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:43.686657  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:26:43.686731  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:26:44.186303  649678 type.go:168] "Request Body" body=""
	I1006 14:26:44.186390  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:44.186758  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:44.686323  649678 type.go:168] "Request Body" body=""
	I1006 14:26:44.686402  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:44.686737  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:45.186332  649678 type.go:168] "Request Body" body=""
	I1006 14:26:45.186410  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:45.186776  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:45.686331  649678 type.go:168] "Request Body" body=""
	I1006 14:26:45.686415  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:45.686779  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:26:45.686856  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:26:46.186339  649678 type.go:168] "Request Body" body=""
	I1006 14:26:46.186430  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:46.186785  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:46.686621  649678 type.go:168] "Request Body" body=""
	I1006 14:26:46.686715  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:46.687061  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:47.186713  649678 type.go:168] "Request Body" body=""
	I1006 14:26:47.186815  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:47.187185  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:47.686868  649678 type.go:168] "Request Body" body=""
	I1006 14:26:47.686957  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:47.687305  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:26:47.687372  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:26:48.185956  649678 type.go:168] "Request Body" body=""
	I1006 14:26:48.186058  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:48.186446  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:48.686113  649678 type.go:168] "Request Body" body=""
	I1006 14:26:48.686236  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:48.686589  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:49.186156  649678 type.go:168] "Request Body" body=""
	I1006 14:26:49.186290  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:49.186679  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:49.686186  649678 type.go:168] "Request Body" body=""
	I1006 14:26:49.686282  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:49.686588  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:50.186404  649678 type.go:168] "Request Body" body=""
	I1006 14:26:50.186506  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:50.186917  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:26:50.186990  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:26:50.686607  649678 type.go:168] "Request Body" body=""
	I1006 14:26:50.686695  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:50.687128  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:51.186788  649678 type.go:168] "Request Body" body=""
	I1006 14:26:51.186968  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:51.187381  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:51.686169  649678 type.go:168] "Request Body" body=""
	I1006 14:26:51.686282  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:51.686666  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:52.186376  649678 type.go:168] "Request Body" body=""
	I1006 14:26:52.186493  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:52.186854  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:52.686550  649678 type.go:168] "Request Body" body=""
	I1006 14:26:52.686631  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:52.686915  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:26:52.686968  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:26:53.186633  649678 type.go:168] "Request Body" body=""
	I1006 14:26:53.186732  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:53.187095  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:53.686774  649678 type.go:168] "Request Body" body=""
	I1006 14:26:53.686871  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:53.687310  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:54.185884  649678 type.go:168] "Request Body" body=""
	I1006 14:26:54.185972  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:54.186391  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:54.685933  649678 type.go:168] "Request Body" body=""
	I1006 14:26:54.686006  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:54.686391  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:55.186064  649678 type.go:168] "Request Body" body=""
	I1006 14:26:55.186180  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:55.186574  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:26:55.186642  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:26:55.686159  649678 type.go:168] "Request Body" body=""
	I1006 14:26:55.686263  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:55.686668  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:56.186304  649678 type.go:168] "Request Body" body=""
	I1006 14:26:56.186418  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:56.186815  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:56.686705  649678 type.go:168] "Request Body" body=""
	I1006 14:26:56.686789  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:56.687169  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:57.186778  649678 type.go:168] "Request Body" body=""
	I1006 14:26:57.186869  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:57.187240  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:26:57.187304  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:26:57.685924  649678 type.go:168] "Request Body" body=""
	I1006 14:26:57.686000  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:57.686362  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:58.185951  649678 type.go:168] "Request Body" body=""
	I1006 14:26:58.186045  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:58.186445  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:58.685995  649678 type.go:168] "Request Body" body=""
	I1006 14:26:58.686071  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:58.686437  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:59.186003  649678 type.go:168] "Request Body" body=""
	I1006 14:26:59.186190  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:59.186571  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:59.686153  649678 type.go:168] "Request Body" body=""
	I1006 14:26:59.686257  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:59.686662  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:26:59.686725  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:27:00.186605  649678 type.go:168] "Request Body" body=""
	I1006 14:27:00.186714  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:00.187091  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:00.686763  649678 type.go:168] "Request Body" body=""
	I1006 14:27:00.686859  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:00.687243  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:01.186928  649678 type.go:168] "Request Body" body=""
	I1006 14:27:01.187012  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:01.187398  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:01.686308  649678 type.go:168] "Request Body" body=""
	I1006 14:27:01.686391  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:01.686761  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:27:01.686839  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:27:02.186358  649678 type.go:168] "Request Body" body=""
	I1006 14:27:02.186439  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:02.186809  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:02.686423  649678 type.go:168] "Request Body" body=""
	I1006 14:27:02.686509  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:02.686907  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:03.186590  649678 type.go:168] "Request Body" body=""
	I1006 14:27:03.186676  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:03.187035  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:03.686678  649678 type.go:168] "Request Body" body=""
	I1006 14:27:03.686764  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:03.687130  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:27:03.687245  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:27:04.186807  649678 type.go:168] "Request Body" body=""
	I1006 14:27:04.186891  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:04.187266  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:04.686913  649678 type.go:168] "Request Body" body=""
	I1006 14:27:04.686987  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:04.687327  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:05.185951  649678 type.go:168] "Request Body" body=""
	I1006 14:27:05.186036  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:05.186442  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:05.685992  649678 type.go:168] "Request Body" body=""
	I1006 14:27:05.686068  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:05.686436  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:06.186013  649678 type.go:168] "Request Body" body=""
	I1006 14:27:06.186094  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:06.186496  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:27:06.186569  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:27:06.686265  649678 type.go:168] "Request Body" body=""
	I1006 14:27:06.686367  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:06.686740  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:07.186336  649678 type.go:168] "Request Body" body=""
	I1006 14:27:07.186417  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:07.186760  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:07.686331  649678 type.go:168] "Request Body" body=""
	I1006 14:27:07.686437  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:07.686806  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:08.186436  649678 type.go:168] "Request Body" body=""
	I1006 14:27:08.186520  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:08.186903  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:27:08.186969  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:27:08.686610  649678 type.go:168] "Request Body" body=""
	I1006 14:27:08.686699  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:08.687059  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:09.186699  649678 type.go:168] "Request Body" body=""
	I1006 14:27:09.186792  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:09.187140  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:09.686782  649678 type.go:168] "Request Body" body=""
	I1006 14:27:09.686873  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:09.687256  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:10.185990  649678 type.go:168] "Request Body" body=""
	I1006 14:27:10.186073  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:10.186441  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:10.686081  649678 type.go:168] "Request Body" body=""
	I1006 14:27:10.686241  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:10.686611  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:27:10.686681  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:27:11.186246  649678 type.go:168] "Request Body" body=""
	I1006 14:27:11.186326  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:11.186676  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:11.686547  649678 type.go:168] "Request Body" body=""
	I1006 14:27:11.686634  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:11.686982  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:12.186629  649678 type.go:168] "Request Body" body=""
	I1006 14:27:12.186708  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:12.187095  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:12.686714  649678 type.go:168] "Request Body" body=""
	I1006 14:27:12.686808  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:12.687182  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:27:12.687301  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:27:13.186802  649678 type.go:168] "Request Body" body=""
	I1006 14:27:13.186882  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:13.187293  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:13.686883  649678 type.go:168] "Request Body" body=""
	I1006 14:27:13.686963  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:13.687307  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:14.185879  649678 type.go:168] "Request Body" body=""
	I1006 14:27:14.185967  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:14.186371  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:14.685892  649678 type.go:168] "Request Body" body=""
	I1006 14:27:14.685968  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:14.686306  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:15.185837  649678 type.go:168] "Request Body" body=""
	I1006 14:27:15.185912  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:15.186295  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:27:15.186372  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:27:15.685893  649678 type.go:168] "Request Body" body=""
	I1006 14:27:15.685969  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:15.686294  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:16.185990  649678 type.go:168] "Request Body" body=""
	I1006 14:27:16.186081  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:16.186492  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:16.686393  649678 type.go:168] "Request Body" body=""
	I1006 14:27:16.686478  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:16.686834  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:17.186384  649678 type.go:168] "Request Body" body=""
	I1006 14:27:17.186479  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:17.186834  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:27:17.186910  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:27:17.686523  649678 type.go:168] "Request Body" body=""
	I1006 14:27:17.686606  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:17.686989  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:18.186641  649678 type.go:168] "Request Body" body=""
	I1006 14:27:18.186739  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:18.187119  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:18.686755  649678 type.go:168] "Request Body" body=""
	I1006 14:27:18.686840  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:18.687189  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:19.186887  649678 type.go:168] "Request Body" body=""
	I1006 14:27:19.186975  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:19.187444  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:27:19.187516  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:27:19.686032  649678 type.go:168] "Request Body" body=""
	I1006 14:27:19.686111  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:19.686551  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:20.186447  649678 type.go:168] "Request Body" body=""
	I1006 14:27:20.186532  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:20.186905  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:20.686572  649678 type.go:168] "Request Body" body=""
	I1006 14:27:20.686660  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:20.687016  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:21.186692  649678 type.go:168] "Request Body" body=""
	I1006 14:27:21.186778  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:21.187150  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:21.685991  649678 type.go:168] "Request Body" body=""
	I1006 14:27:21.686073  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:21.686471  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:27:21.686536  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:27:22.186060  649678 type.go:168] "Request Body" body=""
	I1006 14:27:22.186159  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:22.186562  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:22.686161  649678 type.go:168] "Request Body" body=""
	I1006 14:27:22.686270  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:22.686631  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:23.186276  649678 type.go:168] "Request Body" body=""
	I1006 14:27:23.186365  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:23.186747  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:23.686349  649678 type.go:168] "Request Body" body=""
	I1006 14:27:23.686435  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:23.686810  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:27:23.686876  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:27:24.186408  649678 type.go:168] "Request Body" body=""
	I1006 14:27:24.186497  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:24.186870  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:24.686536  649678 type.go:168] "Request Body" body=""
	I1006 14:27:24.686611  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:24.686963  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:25.186632  649678 type.go:168] "Request Body" body=""
	I1006 14:27:25.186708  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:25.187049  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:25.686802  649678 type.go:168] "Request Body" body=""
	I1006 14:27:25.686882  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:25.687264  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:27:25.687322  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:27:26.185898  649678 type.go:168] "Request Body" body=""
	I1006 14:27:26.185976  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:26.186375  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:26.686124  649678 type.go:168] "Request Body" body=""
	I1006 14:27:26.686235  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:26.686552  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:27.186223  649678 type.go:168] "Request Body" body=""
	I1006 14:27:27.186300  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:27.186673  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:27.686275  649678 type.go:168] "Request Body" body=""
	I1006 14:27:27.686364  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:27.686719  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:28.186345  649678 type.go:168] "Request Body" body=""
	I1006 14:27:28.186434  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:28.186796  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:27:28.186861  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:27:28.686407  649678 type.go:168] "Request Body" body=""
	I1006 14:27:28.686495  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:28.686858  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:29.186569  649678 type.go:168] "Request Body" body=""
	I1006 14:27:29.186651  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:29.187026  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:29.686656  649678 type.go:168] "Request Body" body=""
	I1006 14:27:29.686728  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:29.687080  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:30.185993  649678 type.go:168] "Request Body" body=""
	I1006 14:27:30.186084  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:30.186484  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:30.686077  649678 type.go:168] "Request Body" body=""
	I1006 14:27:30.686155  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:30.686554  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:27:30.686627  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:27:31.186175  649678 type.go:168] "Request Body" body=""
	I1006 14:27:31.186286  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:31.186680  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:31.686528  649678 type.go:168] "Request Body" body=""
	I1006 14:27:31.686627  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:31.687001  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:32.186675  649678 type.go:168] "Request Body" body=""
	I1006 14:27:32.186758  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:32.187124  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:32.686856  649678 type.go:168] "Request Body" body=""
	I1006 14:27:32.686942  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:32.687307  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:27:32.687374  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:27:33.185899  649678 type.go:168] "Request Body" body=""
	I1006 14:27:33.185977  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:33.186402  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:33.685994  649678 type.go:168] "Request Body" body=""
	I1006 14:27:33.686074  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:33.686482  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:34.186077  649678 type.go:168] "Request Body" body=""
	I1006 14:27:34.186156  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:34.186558  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:34.686141  649678 type.go:168] "Request Body" body=""
	I1006 14:27:34.686238  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:34.686596  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:35.186192  649678 type.go:168] "Request Body" body=""
	I1006 14:27:35.186297  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:35.186668  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:27:35.186738  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:27:35.686376  649678 type.go:168] "Request Body" body=""
	I1006 14:27:35.686471  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:35.686827  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:36.186471  649678 type.go:168] "Request Body" body=""
	I1006 14:27:36.186549  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:36.186909  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:36.686773  649678 type.go:168] "Request Body" body=""
	I1006 14:27:36.686851  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:36.687225  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:37.186866  649678 type.go:168] "Request Body" body=""
	I1006 14:27:37.186943  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:37.187324  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:27:37.187402  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:27:37.685875  649678 type.go:168] "Request Body" body=""
	I1006 14:27:37.685951  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:37.686318  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:38.185935  649678 type.go:168] "Request Body" body=""
	I1006 14:27:38.186022  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:38.186413  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:38.685990  649678 type.go:168] "Request Body" body=""
	I1006 14:27:38.686065  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:38.686446  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:39.186040  649678 type.go:168] "Request Body" body=""
	I1006 14:27:39.186119  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:39.186517  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:39.686067  649678 type.go:168] "Request Body" body=""
	I1006 14:27:39.686152  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:39.686509  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:27:39.686570  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:27:40.186335  649678 type.go:168] "Request Body" body=""
	I1006 14:27:40.186421  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:40.186798  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:40.686383  649678 type.go:168] "Request Body" body=""
	I1006 14:27:40.686477  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:40.686843  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:41.186496  649678 type.go:168] "Request Body" body=""
	I1006 14:27:41.186589  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:41.186955  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:41.686485  649678 type.go:168] "Request Body" body=""
	I1006 14:27:41.686563  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:41.686938  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:27:41.687005  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:27:42.186439  649678 type.go:168] "Request Body" body=""
	I1006 14:27:42.186523  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:42.186890  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:42.686663  649678 type.go:168] "Request Body" body=""
	I1006 14:27:42.686739  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:42.687098  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:43.186774  649678 type.go:168] "Request Body" body=""
	I1006 14:27:43.186856  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:43.187251  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:43.686855  649678 type.go:168] "Request Body" body=""
	I1006 14:27:43.686937  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:43.687333  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:27:43.687401  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:27:44.185915  649678 type.go:168] "Request Body" body=""
	I1006 14:27:44.185993  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:44.186423  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:44.685989  649678 type.go:168] "Request Body" body=""
	I1006 14:27:44.686091  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:44.686498  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:45.186085  649678 type.go:168] "Request Body" body=""
	I1006 14:27:45.186165  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:45.186565  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:45.686116  649678 type.go:168] "Request Body" body=""
	I1006 14:27:45.686239  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:45.686593  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:46.186172  649678 type.go:168] "Request Body" body=""
	I1006 14:27:46.186282  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:46.186664  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:27:46.186734  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:27:46.686523  649678 type.go:168] "Request Body" body=""
	I1006 14:27:46.686604  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:46.686968  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:47.186636  649678 type.go:168] "Request Body" body=""
	I1006 14:27:47.186712  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:47.187063  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:47.686695  649678 type.go:168] "Request Body" body=""
	I1006 14:27:47.686772  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:47.687119  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:48.186827  649678 type.go:168] "Request Body" body=""
	I1006 14:27:48.186919  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:48.187317  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:27:48.187383  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:27:48.685929  649678 type.go:168] "Request Body" body=""
	I1006 14:27:48.686009  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:48.686363  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:49.185988  649678 type.go:168] "Request Body" body=""
	I1006 14:27:49.186066  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:49.186471  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:49.686018  649678 type.go:168] "Request Body" body=""
	I1006 14:27:49.686094  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:49.686456  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:50.186006  649678 node_ready.go:38] duration metric: took 6m0.000261558s for node "functional-135520" to be "Ready" ...
	I1006 14:27:50.189087  649678 out.go:203] 
	W1006 14:27:50.190513  649678 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1006 14:27:50.190545  649678 out.go:285] * 
	W1006 14:27:50.192353  649678 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1006 14:27:50.193614  649678 out.go:203] 
	
	
	==> CRI-O <==
	Oct 06 14:27:46 functional-135520 crio[2950]: time="2025-10-06T14:27:46.537419135Z" level=info msg="createCtr: removing container f80a0bc34f4906badae74343ef10a13edfa6593b57364ee2ca15c1e45cb44c93" id=ace47d13-2cff-4b23-8acb-40ad278ca282 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:27:46 functional-135520 crio[2950]: time="2025-10-06T14:27:46.53746026Z" level=info msg="createCtr: deleting container f80a0bc34f4906badae74343ef10a13edfa6593b57364ee2ca15c1e45cb44c93 from storage" id=ace47d13-2cff-4b23-8acb-40ad278ca282 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:27:46 functional-135520 crio[2950]: time="2025-10-06T14:27:46.539305817Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-functional-135520_kube-system_f24ebbe4b3fc964d32e35d345c0d3653_0" id=ace47d13-2cff-4b23-8acb-40ad278ca282 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:27:47 functional-135520 crio[2950]: time="2025-10-06T14:27:47.516327205Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=f3c3cfee-4381-4062-9878-d3a682d6b077 name=/runtime.v1.ImageService/ImageStatus
	Oct 06 14:27:47 functional-135520 crio[2950]: time="2025-10-06T14:27:47.517175909Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=e1ad7734-e28a-4ef9-ac87-ff6a11a9b1fa name=/runtime.v1.ImageService/ImageStatus
	Oct 06 14:27:47 functional-135520 crio[2950]: time="2025-10-06T14:27:47.51810305Z" level=info msg="Creating container: kube-system/kube-apiserver-functional-135520/kube-apiserver" id=a1231ed3-9295-4326-ba3f-48f5ca67863c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:27:47 functional-135520 crio[2950]: time="2025-10-06T14:27:47.518388126Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 14:27:47 functional-135520 crio[2950]: time="2025-10-06T14:27:47.522733428Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 14:27:47 functional-135520 crio[2950]: time="2025-10-06T14:27:47.523451313Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 14:27:47 functional-135520 crio[2950]: time="2025-10-06T14:27:47.542007657Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=a1231ed3-9295-4326-ba3f-48f5ca67863c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:27:47 functional-135520 crio[2950]: time="2025-10-06T14:27:47.543292385Z" level=info msg="createCtr: deleting container ID 0dc82131c04d9ac24c1a4973bf654cfe15f2802424cb559d5727a3e886571a9c from idIndex" id=a1231ed3-9295-4326-ba3f-48f5ca67863c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:27:47 functional-135520 crio[2950]: time="2025-10-06T14:27:47.543325041Z" level=info msg="createCtr: removing container 0dc82131c04d9ac24c1a4973bf654cfe15f2802424cb559d5727a3e886571a9c" id=a1231ed3-9295-4326-ba3f-48f5ca67863c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:27:47 functional-135520 crio[2950]: time="2025-10-06T14:27:47.543353686Z" level=info msg="createCtr: deleting container 0dc82131c04d9ac24c1a4973bf654cfe15f2802424cb559d5727a3e886571a9c from storage" id=a1231ed3-9295-4326-ba3f-48f5ca67863c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:27:47 functional-135520 crio[2950]: time="2025-10-06T14:27:47.545165252Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-functional-135520_kube-system_64c921c0d544efd1faaa2d85c050bc13_0" id=a1231ed3-9295-4326-ba3f-48f5ca67863c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:27:48 functional-135520 crio[2950]: time="2025-10-06T14:27:48.516281237Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=e09686fa-6b36-4172-b0fd-7c3937c59ca0 name=/runtime.v1.ImageService/ImageStatus
	Oct 06 14:27:48 functional-135520 crio[2950]: time="2025-10-06T14:27:48.517137159Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=3f9670e4-c9b8-4ebd-ad5b-eca380b40295 name=/runtime.v1.ImageService/ImageStatus
	Oct 06 14:27:48 functional-135520 crio[2950]: time="2025-10-06T14:27:48.518045551Z" level=info msg="Creating container: kube-system/kube-scheduler-functional-135520/kube-scheduler" id=f28264a9-ff49-4a8a-a176-67a7f8d3e48f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:27:48 functional-135520 crio[2950]: time="2025-10-06T14:27:48.518303592Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 14:27:48 functional-135520 crio[2950]: time="2025-10-06T14:27:48.521571675Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 14:27:48 functional-135520 crio[2950]: time="2025-10-06T14:27:48.521988529Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 14:27:48 functional-135520 crio[2950]: time="2025-10-06T14:27:48.53715491Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=f28264a9-ff49-4a8a-a176-67a7f8d3e48f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:27:48 functional-135520 crio[2950]: time="2025-10-06T14:27:48.538436064Z" level=info msg="createCtr: deleting container ID 53f44639142744b47b0894826d110b7fa6706512d5ce9b8100673f21c18db971 from idIndex" id=f28264a9-ff49-4a8a-a176-67a7f8d3e48f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:27:48 functional-135520 crio[2950]: time="2025-10-06T14:27:48.538465371Z" level=info msg="createCtr: removing container 53f44639142744b47b0894826d110b7fa6706512d5ce9b8100673f21c18db971" id=f28264a9-ff49-4a8a-a176-67a7f8d3e48f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:27:48 functional-135520 crio[2950]: time="2025-10-06T14:27:48.538492974Z" level=info msg="createCtr: deleting container 53f44639142744b47b0894826d110b7fa6706512d5ce9b8100673f21c18db971 from storage" id=f28264a9-ff49-4a8a-a176-67a7f8d3e48f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:27:48 functional-135520 crio[2950]: time="2025-10-06T14:27:48.54058168Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-functional-135520_kube-system_5115bd1eba9594a3f2b99b5d6a4b9d59_0" id=f28264a9-ff49-4a8a-a176-67a7f8d3e48f name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:27:54.208699    4521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:27:54.209315    4521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:27:54.210467    4521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:27:54.210945    4521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:27:54.212569    4521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	
	
	==> kernel <==
	 14:27:54 up  5:10,  0 user,  load average: 0.39, 0.37, 0.53
	Linux functional-135520 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 06 14:27:46 functional-135520 kubelet[1801]: E1006 14:27:46.539677    1801 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 06 14:27:46 functional-135520 kubelet[1801]:         container etcd start failed in pod etcd-functional-135520_kube-system(f24ebbe4b3fc964d32e35d345c0d3653): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 06 14:27:46 functional-135520 kubelet[1801]:  > logger="UnhandledError"
	Oct 06 14:27:46 functional-135520 kubelet[1801]: E1006 14:27:46.539706    1801 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-functional-135520" podUID="f24ebbe4b3fc964d32e35d345c0d3653"
	Oct 06 14:27:47 functional-135520 kubelet[1801]: E1006 14:27:47.515820    1801 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-135520\" not found" node="functional-135520"
	Oct 06 14:27:47 functional-135520 kubelet[1801]: E1006 14:27:47.545460    1801 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 06 14:27:47 functional-135520 kubelet[1801]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 06 14:27:47 functional-135520 kubelet[1801]:  > podSandboxID="c8563dd0b37e233739b3c3a382aa7aa99838d00dddfb4c17bcee8072fc8b2e15"
	Oct 06 14:27:47 functional-135520 kubelet[1801]: E1006 14:27:47.545569    1801 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 06 14:27:47 functional-135520 kubelet[1801]:         container kube-apiserver start failed in pod kube-apiserver-functional-135520_kube-system(64c921c0d544efd1faaa2d85c050bc13): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 06 14:27:47 functional-135520 kubelet[1801]:  > logger="UnhandledError"
	Oct 06 14:27:47 functional-135520 kubelet[1801]: E1006 14:27:47.545614    1801 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-functional-135520" podUID="64c921c0d544efd1faaa2d85c050bc13"
	Oct 06 14:27:48 functional-135520 kubelet[1801]: E1006 14:27:48.515740    1801 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-135520\" not found" node="functional-135520"
	Oct 06 14:27:48 functional-135520 kubelet[1801]: E1006 14:27:48.540814    1801 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 06 14:27:48 functional-135520 kubelet[1801]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 06 14:27:48 functional-135520 kubelet[1801]:  > podSandboxID="a92786c5eb4654629f78c624cdcfef7af25c891888e7f9c4c81b2755c377da1a"
	Oct 06 14:27:48 functional-135520 kubelet[1801]: E1006 14:27:48.540922    1801 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 06 14:27:48 functional-135520 kubelet[1801]:         container kube-scheduler start failed in pod kube-scheduler-functional-135520_kube-system(5115bd1eba9594a3f2b99b5d6a4b9d59): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 06 14:27:48 functional-135520 kubelet[1801]:  > logger="UnhandledError"
	Oct 06 14:27:48 functional-135520 kubelet[1801]: E1006 14:27:48.540950    1801 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-functional-135520" podUID="5115bd1eba9594a3f2b99b5d6a4b9d59"
	Oct 06 14:27:50 functional-135520 kubelet[1801]: E1006 14:27:50.834294    1801 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://192.168.49.2:8441/api/v1/namespaces/default/events/functional-135520.186beca30fea008b\": dial tcp 192.168.49.2:8441: connect: connection refused" event="&Event{ObjectMeta:{functional-135520.186beca30fea008b  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-135520,UID:functional-135520,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node functional-135520 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:functional-135520,},FirstTimestamp:2025-10-06 14:17:44.509128843 +0000 UTC m=+0.464938753,LastTimestamp:2025-10-06 14:17:44.510554344 +0000 UTC m=+0.466364247,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-135520,}"
	Oct 06 14:27:52 functional-135520 kubelet[1801]: E1006 14:27:52.072853    1801 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://192.168.49.2:8441/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
	Oct 06 14:27:52 functional-135520 kubelet[1801]: E1006 14:27:52.201762    1801 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-135520?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="7s"
	Oct 06 14:27:52 functional-135520 kubelet[1801]: I1006 14:27:52.413270    1801 kubelet_node_status.go:75] "Attempting to register node" node="functional-135520"
	Oct 06 14:27:52 functional-135520 kubelet[1801]: E1006 14:27:52.413869    1801 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8441/api/v1/nodes\": dial tcp 192.168.49.2:8441: connect: connection refused" node="functional-135520"
	

                                                
                                                
-- /stdout --
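The kubelet and CRI-O entries above all fail at the same point: every control-plane container (etcd, kube-apiserver, kube-scheduler) dies at create time with "cannot open sd-bus: No such file or directory", so nothing ever listens on 192.168.49.2:8441 and the node never reports Ready. That error usually means the OCI runtime was configured for the systemd cgroup manager but cannot reach systemd's private bus inside the kic container. A minimal diagnostic sketch, assuming the container name from this report and the stock CRI-O config location (standard defaults, not paths taken from this log):

	docker exec functional-135520 ps -o comm= -p 1                   # PID 1 should be systemd if /sbin/init came up
	docker exec functional-135520 ls -l /run/systemd/private         # the sd-bus socket runc needs for systemd cgroups
	docker exec functional-135520 grep -R cgroup_manager /etc/crio/  # "systemd" here requires the socket above to exist

If PID 1 is not systemd or the socket is missing, the usual remedies are fixing the container's init or switching CRI-O to cgroup_manager = "cgroupfs"; both are general CRI-O observations, not fixes verified against this run.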
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-135520 -n functional-135520
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-135520 -n functional-135520: exit status 2 (310.644049ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-135520" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/serial/KubectlGetPods (2.19s)
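The 6m0s in "took 6m0.000261558s" above is a fixed-deadline poll: the timestamps step by roughly 500ms (.186xxx/.686xxx within each second) as the client retries GET /api/v1/nodes/functional-135520 until the deadline expires. A shell sketch of an equivalent wait, where the jsonpath query is an illustrative way to read the Ready condition, not minikube's actual implementation:

	end=$((SECONDS + 360))    # 6m deadline, matching "wait 6m0s for node"
	until [ "$(kubectl --context functional-135520 get node functional-135520 \
	        -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}' 2>/dev/null)" = "True" ]; do
	  [ "$SECONDS" -ge "$end" ] && { echo "WaitNodeCondition: context deadline exceeded"; break; }
	  sleep 0.5               # 500ms cadence, matching the log timestamps
	done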

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (2.14s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-135520 kubectl -- --context functional-135520 get pods
functional_test.go:731: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-135520 kubectl -- --context functional-135520 get pods: exit status 1 (109.754369ms)

                                                
                                                
** stderr ** 
	E1006 14:28:02.340417  655165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1006 14:28:02.340884  655165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1006 14:28:02.341988  655165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1006 14:28:02.342340  655165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1006 14:28:02.343737  655165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:734: failed to get pods. args "out/minikube-linux-amd64 -p functional-135520 kubectl -- --context functional-135520 get pods": exit status 1
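"minikube kubectl --" runs a kubectl binary matched to the cluster's Kubernetes version and forwards the remaining arguments, so while nothing answers on 8441 it fails with the same connection-refused errors as a plain kubectl call. The equivalent direct invocation (same endpoint, same outcome while the apiserver is down):

	kubectl --context functional-135520 get pods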
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/serial/MinikubeKubectlCmd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/serial/MinikubeKubectlCmd]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-135520
helpers_test.go:243: (dbg) docker inspect functional-135520:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "3dd9a226ea42760de316c3cd10c240a06a297d856d2072b1bb8c9de31097dd20",
	        "Created": "2025-10-06T14:13:32.283355011Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 644403,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-06T14:13:32.318096257Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/3dd9a226ea42760de316c3cd10c240a06a297d856d2072b1bb8c9de31097dd20/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3dd9a226ea42760de316c3cd10c240a06a297d856d2072b1bb8c9de31097dd20/hostname",
	        "HostsPath": "/var/lib/docker/containers/3dd9a226ea42760de316c3cd10c240a06a297d856d2072b1bb8c9de31097dd20/hosts",
	        "LogPath": "/var/lib/docker/containers/3dd9a226ea42760de316c3cd10c240a06a297d856d2072b1bb8c9de31097dd20/3dd9a226ea42760de316c3cd10c240a06a297d856d2072b1bb8c9de31097dd20-json.log",
	        "Name": "/functional-135520",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-135520:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-135520",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "3dd9a226ea42760de316c3cd10c240a06a297d856d2072b1bb8c9de31097dd20",
	                "LowerDir": "/var/lib/docker/overlay2/fc963905026931708302dacddcd89a9d41c6b02cea585cc1ff491aa62dc8d60a-init/diff:/var/lib/docker/overlay2/498c39ad2e273bbda04a4b230222b9767ea2da097b1fe98436168d26143cd080/diff",
	                "MergedDir": "/var/lib/docker/overlay2/fc963905026931708302dacddcd89a9d41c6b02cea585cc1ff491aa62dc8d60a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/fc963905026931708302dacddcd89a9d41c6b02cea585cc1ff491aa62dc8d60a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/fc963905026931708302dacddcd89a9d41c6b02cea585cc1ff491aa62dc8d60a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-135520",
	                "Source": "/var/lib/docker/volumes/functional-135520/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-135520",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-135520",
	                "name.minikube.sigs.k8s.io": "functional-135520",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6368ffca3e5840f94a34614c511d9f0a0a4ca0d05de4fe1f94c8bfdc332f1a62",
	            "SandboxKey": "/var/run/docker/netns/6368ffca3e58",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32878"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32879"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32882"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32880"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32881"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-135520": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ae:d1:94:25:38:1c",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f712be59dd18dac98bed5f234c9f77a39e85277143d6f46285adcd3b0185d552",
	                    "EndpointID": "b816964b653b1b5116e3262dfdc87af272931013ef5b9e2714c9ff7357118a6f",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-135520",
	                        "3dd9a226ea42"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
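The inspect output narrows the failure: the container is Running, 8441/tcp is published at 127.0.0.1:32881, and the 192.168.49.2 endpoint is attached, so the connection refused originates inside the guest (no apiserver process listening), not in Docker networking. A quick probe sketch using the addresses above (curl here is a hypothetical diagnostic, not part of the test suite):

	curl -sk --max-time 2 https://127.0.0.1:32881/healthz || echo "refused at the published host port"
	curl -sk --max-time 2 https://192.168.49.2:8441/healthz || echo "refused at the container address"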
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-135520 -n functional-135520
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-135520 -n functional-135520: exit status 2 (304.7476ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctional/serial/MinikubeKubectlCmd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/serial/MinikubeKubectlCmd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-135520 logs -n 25
helpers_test.go:260: TestFunctional/serial/MinikubeKubectlCmd logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                     ARGS                                                      │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ pause   │ nospam-500584 --log_dir /tmp/nospam-500584 pause                                                              │ nospam-500584     │ jenkins │ v1.37.0 │ 06 Oct 25 14:13 UTC │ 06 Oct 25 14:13 UTC │
	│ unpause │ nospam-500584 --log_dir /tmp/nospam-500584 unpause                                                            │ nospam-500584     │ jenkins │ v1.37.0 │ 06 Oct 25 14:13 UTC │ 06 Oct 25 14:13 UTC │
	│ unpause │ nospam-500584 --log_dir /tmp/nospam-500584 unpause                                                            │ nospam-500584     │ jenkins │ v1.37.0 │ 06 Oct 25 14:13 UTC │ 06 Oct 25 14:13 UTC │
	│ unpause │ nospam-500584 --log_dir /tmp/nospam-500584 unpause                                                            │ nospam-500584     │ jenkins │ v1.37.0 │ 06 Oct 25 14:13 UTC │ 06 Oct 25 14:13 UTC │
	│ stop    │ nospam-500584 --log_dir /tmp/nospam-500584 stop                                                               │ nospam-500584     │ jenkins │ v1.37.0 │ 06 Oct 25 14:13 UTC │ 06 Oct 25 14:13 UTC │
	│ stop    │ nospam-500584 --log_dir /tmp/nospam-500584 stop                                                               │ nospam-500584     │ jenkins │ v1.37.0 │ 06 Oct 25 14:13 UTC │ 06 Oct 25 14:13 UTC │
	│ stop    │ nospam-500584 --log_dir /tmp/nospam-500584 stop                                                               │ nospam-500584     │ jenkins │ v1.37.0 │ 06 Oct 25 14:13 UTC │ 06 Oct 25 14:13 UTC │
	│ delete  │ -p nospam-500584                                                                                              │ nospam-500584     │ jenkins │ v1.37.0 │ 06 Oct 25 14:13 UTC │ 06 Oct 25 14:13 UTC │
	│ start   │ -p functional-135520 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:13 UTC │                     │
	│ start   │ -p functional-135520 --alsologtostderr -v=8                                                                   │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:21 UTC │                     │
	│ cache   │ functional-135520 cache add registry.k8s.io/pause:3.1                                                         │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:27 UTC │ 06 Oct 25 14:27 UTC │
	│ cache   │ functional-135520 cache add registry.k8s.io/pause:3.3                                                         │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:27 UTC │ 06 Oct 25 14:27 UTC │
	│ cache   │ functional-135520 cache add registry.k8s.io/pause:latest                                                      │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:27 UTC │ 06 Oct 25 14:27 UTC │
	│ cache   │ functional-135520 cache add minikube-local-cache-test:functional-135520                                       │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:27 UTC │ 06 Oct 25 14:28 UTC │
	│ cache   │ functional-135520 cache delete minikube-local-cache-test:functional-135520                                    │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:28 UTC │ 06 Oct 25 14:28 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                              │ minikube          │ jenkins │ v1.37.0 │ 06 Oct 25 14:28 UTC │ 06 Oct 25 14:28 UTC │
	│ cache   │ list                                                                                                          │ minikube          │ jenkins │ v1.37.0 │ 06 Oct 25 14:28 UTC │ 06 Oct 25 14:28 UTC │
	│ ssh     │ functional-135520 ssh sudo crictl images                                                                      │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:28 UTC │ 06 Oct 25 14:28 UTC │
	│ ssh     │ functional-135520 ssh sudo crictl rmi registry.k8s.io/pause:latest                                            │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:28 UTC │ 06 Oct 25 14:28 UTC │
	│ ssh     │ functional-135520 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                       │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:28 UTC │                     │
	│ cache   │ functional-135520 cache reload                                                                                │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:28 UTC │ 06 Oct 25 14:28 UTC │
	│ ssh     │ functional-135520 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                       │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:28 UTC │ 06 Oct 25 14:28 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                              │ minikube          │ jenkins │ v1.37.0 │ 06 Oct 25 14:28 UTC │ 06 Oct 25 14:28 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                           │ minikube          │ jenkins │ v1.37.0 │ 06 Oct 25 14:28 UTC │ 06 Oct 25 14:28 UTC │
	│ kubectl │ functional-135520 kubectl -- --context functional-135520 get pods                                             │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:28 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/06 14:21:46
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1006 14:21:46.323016  649678 out.go:360] Setting OutFile to fd 1 ...
	I1006 14:21:46.323271  649678 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 14:21:46.323279  649678 out.go:374] Setting ErrFile to fd 2...
	I1006 14:21:46.323283  649678 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 14:21:46.323475  649678 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-626179/.minikube/bin
	I1006 14:21:46.323908  649678 out.go:368] Setting JSON to false
	I1006 14:21:46.324826  649678 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":18242,"bootTime":1759742264,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1006 14:21:46.324926  649678 start.go:140] virtualization: kvm guest
	I1006 14:21:46.326925  649678 out.go:179] * [functional-135520] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1006 14:21:46.327942  649678 notify.go:220] Checking for updates...
	I1006 14:21:46.327965  649678 out.go:179]   - MINIKUBE_LOCATION=21701
	I1006 14:21:46.329155  649678 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1006 14:21:46.330229  649678 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21701-626179/kubeconfig
	I1006 14:21:46.331298  649678 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21701-626179/.minikube
	I1006 14:21:46.332353  649678 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1006 14:21:46.333341  649678 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1006 14:21:46.334666  649678 config.go:182] Loaded profile config "functional-135520": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 14:21:46.334805  649678 driver.go:421] Setting default libvirt URI to qemu:///system
	I1006 14:21:46.359710  649678 docker.go:123] docker version: linux-28.5.0:Docker Engine - Community
	I1006 14:21:46.359861  649678 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 14:21:46.415678  649678 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-06 14:21:46.405264016 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1006 14:21:46.415787  649678 docker.go:318] overlay module found
	I1006 14:21:46.417155  649678 out.go:179] * Using the docker driver based on existing profile
	I1006 14:21:46.418292  649678 start.go:304] selected driver: docker
	I1006 14:21:46.418308  649678 start.go:924] validating driver "docker" against &{Name:functional-135520 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-135520 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 14:21:46.418380  649678 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1006 14:21:46.418468  649678 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 14:21:46.473903  649678 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-06 14:21:46.464043789 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1006 14:21:46.474648  649678 cni.go:84] Creating CNI manager for ""
	I1006 14:21:46.474719  649678 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1006 14:21:46.474770  649678 start.go:348] cluster config:
	{Name:functional-135520 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-135520 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 14:21:46.476311  649678 out.go:179] * Starting "functional-135520" primary control-plane node in "functional-135520" cluster
	I1006 14:21:46.477235  649678 cache.go:123] Beginning downloading kic base image for docker with crio
	I1006 14:21:46.478074  649678 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1006 14:21:46.479119  649678 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1006 14:21:46.479164  649678 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21701-626179/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1006 14:21:46.479185  649678 cache.go:58] Caching tarball of preloaded images
	I1006 14:21:46.479228  649678 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1006 14:21:46.479294  649678 preload.go:233] Found /home/jenkins/minikube-integration/21701-626179/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1006 14:21:46.479309  649678 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1006 14:21:46.479413  649678 profile.go:143] Saving config to /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/config.json ...
	I1006 14:21:46.499695  649678 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1006 14:21:46.499723  649678 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1006 14:21:46.499744  649678 cache.go:232] Successfully downloaded all kic artifacts
	I1006 14:21:46.499779  649678 start.go:360] acquireMachinesLock for functional-135520: {Name:mk634323c4619e77647ac9d9aaca492e399526ae Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1006 14:21:46.499864  649678 start.go:364] duration metric: took 47.895µs to acquireMachinesLock for "functional-135520"
	I1006 14:21:46.499886  649678 start.go:96] Skipping create...Using existing machine configuration
	I1006 14:21:46.499892  649678 fix.go:54] fixHost starting: 
	I1006 14:21:46.500243  649678 cli_runner.go:164] Run: docker container inspect functional-135520 --format={{.State.Status}}
	I1006 14:21:46.517601  649678 fix.go:112] recreateIfNeeded on functional-135520: state=Running err=<nil>
	W1006 14:21:46.517640  649678 fix.go:138] unexpected machine state, will restart: <nil>
	I1006 14:21:46.519112  649678 out.go:252] * Updating the running docker "functional-135520" container ...
	I1006 14:21:46.519143  649678 machine.go:93] provisionDockerMachine start ...
	I1006 14:21:46.519223  649678 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-135520
	I1006 14:21:46.537175  649678 main.go:141] libmachine: Using SSH client type: native
	I1006 14:21:46.537424  649678 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32878 <nil> <nil>}
	I1006 14:21:46.537438  649678 main.go:141] libmachine: About to run SSH command:
	hostname
	I1006 14:21:46.682374  649678 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-135520
	
	I1006 14:21:46.682420  649678 ubuntu.go:182] provisioning hostname "functional-135520"
	I1006 14:21:46.682484  649678 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-135520
	I1006 14:21:46.700103  649678 main.go:141] libmachine: Using SSH client type: native
	I1006 14:21:46.700382  649678 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32878 <nil> <nil>}
	I1006 14:21:46.700401  649678 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-135520 && echo "functional-135520" | sudo tee /etc/hostname
	I1006 14:21:46.853845  649678 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-135520
	
	I1006 14:21:46.853924  649678 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-135520
	I1006 14:21:46.872015  649678 main.go:141] libmachine: Using SSH client type: native
	I1006 14:21:46.872265  649678 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32878 <nil> <nil>}
	I1006 14:21:46.872284  649678 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-135520' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-135520/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-135520' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1006 14:21:47.017154  649678 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1006 14:21:47.017184  649678 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21701-626179/.minikube CaCertPath:/home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21701-626179/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21701-626179/.minikube}
	I1006 14:21:47.017239  649678 ubuntu.go:190] setting up certificates
	I1006 14:21:47.017253  649678 provision.go:84] configureAuth start
	I1006 14:21:47.017340  649678 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-135520
	I1006 14:21:47.035104  649678 provision.go:143] copyHostCerts
	I1006 14:21:47.035140  649678 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21701-626179/.minikube/key.pem
	I1006 14:21:47.035175  649678 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-626179/.minikube/key.pem, removing ...
	I1006 14:21:47.035198  649678 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-626179/.minikube/key.pem
	I1006 14:21:47.035336  649678 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21701-626179/.minikube/key.pem (1679 bytes)
	I1006 14:21:47.035448  649678 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21701-626179/.minikube/ca.pem
	I1006 14:21:47.035468  649678 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-626179/.minikube/ca.pem, removing ...
	I1006 14:21:47.035478  649678 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-626179/.minikube/ca.pem
	I1006 14:21:47.035513  649678 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21701-626179/.minikube/ca.pem (1082 bytes)
	I1006 14:21:47.035575  649678 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21701-626179/.minikube/cert.pem
	I1006 14:21:47.035593  649678 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-626179/.minikube/cert.pem, removing ...
	I1006 14:21:47.035599  649678 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-626179/.minikube/cert.pem
	I1006 14:21:47.035623  649678 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21701-626179/.minikube/cert.pem (1123 bytes)
	I1006 14:21:47.035688  649678 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21701-626179/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca-key.pem org=jenkins.functional-135520 san=[127.0.0.1 192.168.49.2 functional-135520 localhost minikube]
	I1006 14:21:47.332166  649678 provision.go:177] copyRemoteCerts
	I1006 14:21:47.332258  649678 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1006 14:21:47.332304  649678 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-135520
	I1006 14:21:47.351185  649678 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32878 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/functional-135520/id_rsa Username:docker}
	I1006 14:21:47.453191  649678 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1006 14:21:47.453264  649678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1006 14:21:47.470840  649678 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1006 14:21:47.470907  649678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1006 14:21:47.487466  649678 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1006 14:21:47.487518  649678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1006 14:21:47.504343  649678 provision.go:87] duration metric: took 487.07429ms to configureAuth
	I1006 14:21:47.504374  649678 ubuntu.go:206] setting minikube options for container-runtime
	I1006 14:21:47.504541  649678 config.go:182] Loaded profile config "functional-135520": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 14:21:47.504639  649678 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-135520
	I1006 14:21:47.523029  649678 main.go:141] libmachine: Using SSH client type: native
	I1006 14:21:47.523280  649678 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32878 <nil> <nil>}
	I1006 14:21:47.523307  649678 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1006 14:21:47.788227  649678 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1006 14:21:47.788259  649678 machine.go:96] duration metric: took 1.269106143s to provisionDockerMachine
	I1006 14:21:47.788275  649678 start.go:293] postStartSetup for "functional-135520" (driver="docker")
	I1006 14:21:47.788290  649678 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1006 14:21:47.788372  649678 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1006 14:21:47.788428  649678 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-135520
	I1006 14:21:47.805850  649678 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32878 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/functional-135520/id_rsa Username:docker}
	I1006 14:21:47.908894  649678 ssh_runner.go:195] Run: cat /etc/os-release
	I1006 14:21:47.912773  649678 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1006 14:21:47.912795  649678 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1006 14:21:47.912801  649678 command_runner.go:130] > VERSION_ID="12"
	I1006 14:21:47.912807  649678 command_runner.go:130] > VERSION="12 (bookworm)"
	I1006 14:21:47.912813  649678 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1006 14:21:47.912819  649678 command_runner.go:130] > ID=debian
	I1006 14:21:47.912827  649678 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1006 14:21:47.912834  649678 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1006 14:21:47.912843  649678 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1006 14:21:47.912900  649678 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1006 14:21:47.912919  649678 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1006 14:21:47.912929  649678 filesync.go:126] Scanning /home/jenkins/minikube-integration/21701-626179/.minikube/addons for local assets ...
	I1006 14:21:47.912988  649678 filesync.go:126] Scanning /home/jenkins/minikube-integration/21701-626179/.minikube/files for local assets ...
	I1006 14:21:47.913065  649678 filesync.go:149] local asset: /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem -> 6297192.pem in /etc/ssl/certs
	I1006 14:21:47.913078  649678 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem -> /etc/ssl/certs/6297192.pem
	I1006 14:21:47.913143  649678 filesync.go:149] local asset: /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/test/nested/copy/629719/hosts -> hosts in /etc/test/nested/copy/629719
	I1006 14:21:47.913151  649678 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/test/nested/copy/629719/hosts -> /etc/test/nested/copy/629719/hosts
	I1006 14:21:47.913182  649678 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/629719
	I1006 14:21:47.920839  649678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem --> /etc/ssl/certs/6297192.pem (1708 bytes)
	I1006 14:21:47.937786  649678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/test/nested/copy/629719/hosts --> /etc/test/nested/copy/629719/hosts (40 bytes)
	I1006 14:21:47.954760  649678 start.go:296] duration metric: took 166.455369ms for postStartSetup
	I1006 14:21:47.954834  649678 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1006 14:21:47.954870  649678 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-135520
	I1006 14:21:47.972368  649678 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32878 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/functional-135520/id_rsa Username:docker}
	I1006 14:21:48.072535  649678 command_runner.go:130] > 38%
	I1006 14:21:48.072624  649678 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1006 14:21:48.077267  649678 command_runner.go:130] > 182G
	I1006 14:21:48.077574  649678 fix.go:56] duration metric: took 1.577678011s for fixHost
	I1006 14:21:48.077595  649678 start.go:83] releasing machines lock for "functional-135520", held for 1.577717734s
	I1006 14:21:48.077675  649678 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-135520
	I1006 14:21:48.095670  649678 ssh_runner.go:195] Run: cat /version.json
	I1006 14:21:48.095722  649678 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-135520
	I1006 14:21:48.095754  649678 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1006 14:21:48.095827  649678 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-135520
	I1006 14:21:48.113591  649678 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32878 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/functional-135520/id_rsa Username:docker}
	I1006 14:21:48.115313  649678 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32878 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/functional-135520/id_rsa Username:docker}
	I1006 14:21:48.268773  649678 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1006 14:21:48.268839  649678 command_runner.go:130] > {"iso_version": "v1.37.0-1758198818-20370", "kicbase_version": "v0.0.48-1759382731-21643", "minikube_version": "v1.37.0", "commit": "b0c70dd4d342e6443a02916e52d246d8cdb181c4"}
	I1006 14:21:48.268953  649678 ssh_runner.go:195] Run: systemctl --version
	I1006 14:21:48.275683  649678 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1006 14:21:48.275717  649678 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1006 14:21:48.275778  649678 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1006 14:21:48.311695  649678 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1006 14:21:48.316662  649678 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1006 14:21:48.316719  649678 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1006 14:21:48.316778  649678 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1006 14:21:48.324682  649678 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1006 14:21:48.324705  649678 start.go:495] detecting cgroup driver to use...
	I1006 14:21:48.324740  649678 detect.go:190] detected "systemd" cgroup driver on host os
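The "systemd" decision here mirrors the host: the docker info dump earlier in this start log reports CgroupDriver:systemd, and minikube then aligns CRI-O's cgroup manager with it (see the cgroup_manager rewrite a few lines below). A minimal way to check the same value by hand, assuming a local Docker daemon:

    # Ask the Docker daemon which cgroup driver it uses (the same fact detect.go reads)
    docker info --format '{{.CgroupDriver}}'
    # -> systemd on this agent, per the info dump above
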
	I1006 14:21:48.324780  649678 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1006 14:21:48.339343  649678 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1006 14:21:48.350971  649678 docker.go:218] disabling cri-docker service (if available) ...
	I1006 14:21:48.351020  649678 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1006 14:21:48.364377  649678 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1006 14:21:48.375810  649678 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1006 14:21:48.466998  649678 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1006 14:21:48.555437  649678 docker.go:234] disabling docker service ...
	I1006 14:21:48.555507  649678 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1006 14:21:48.569642  649678 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1006 14:21:48.581371  649678 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1006 14:21:48.660341  649678 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1006 14:21:48.745051  649678 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1006 14:21:48.757689  649678 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1006 14:21:48.770829  649678 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1006 14:21:48.771733  649678 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1006 14:21:48.771806  649678 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:21:48.781084  649678 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1006 14:21:48.781164  649678 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:21:48.790125  649678 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:21:48.798751  649678 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:21:48.807637  649678 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1006 14:21:48.815986  649678 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:21:48.824650  649678 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:21:48.832873  649678 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
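Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly these settings (a sketch of the end state implied by the commands, not a dump of the actual file):

    pause_image = "registry.k8s.io/pause:3.10.1"
    cgroup_manager = "systemd"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]

That pins CRI-O to the expected pause image, switches it to the systemd cgroup manager detected above, and lets unprivileged pods bind low ports.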
	I1006 14:21:48.841368  649678 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1006 14:21:48.847999  649678 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1006 14:21:48.848646  649678 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1006 14:21:48.855735  649678 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 14:21:48.941247  649678 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1006 14:21:49.054732  649678 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1006 14:21:49.054813  649678 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1006 14:21:49.059042  649678 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1006 14:21:49.059070  649678 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1006 14:21:49.059079  649678 command_runner.go:130] > Device: 0,59	Inode: 3845        Links: 1
	I1006 14:21:49.059086  649678 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1006 14:21:49.059091  649678 command_runner.go:130] > Access: 2025-10-06 14:21:49.037104102 +0000
	I1006 14:21:49.059104  649678 command_runner.go:130] > Modify: 2025-10-06 14:21:49.037104102 +0000
	I1006 14:21:49.059109  649678 command_runner.go:130] > Change: 2025-10-06 14:21:49.037104102 +0000
	I1006 14:21:49.059113  649678 command_runner.go:130] >  Birth: 2025-10-06 14:21:49.037104102 +0000
	I1006 14:21:49.059133  649678 start.go:563] Will wait 60s for crictl version
	I1006 14:21:49.059181  649678 ssh_runner.go:195] Run: which crictl
	I1006 14:21:49.062689  649678 command_runner.go:130] > /usr/local/bin/crictl
	I1006 14:21:49.062764  649678 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1006 14:21:49.086605  649678 command_runner.go:130] > Version:  0.1.0
	I1006 14:21:49.086623  649678 command_runner.go:130] > RuntimeName:  cri-o
	I1006 14:21:49.086627  649678 command_runner.go:130] > RuntimeVersion:  1.34.1
	I1006 14:21:49.086632  649678 command_runner.go:130] > RuntimeApiVersion:  v1
	I1006 14:21:49.088423  649678 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1006 14:21:49.088499  649678 ssh_runner.go:195] Run: crio --version
	I1006 14:21:49.118625  649678 command_runner.go:130] > crio version 1.34.1
	I1006 14:21:49.118652  649678 command_runner.go:130] >    GitCommit:      8e14bff4153ba033f12ed3ffa3cadaca5425b313
	I1006 14:21:49.118659  649678 command_runner.go:130] >    GitCommitDate:  2025-10-01T13:04:13Z
	I1006 14:21:49.118666  649678 command_runner.go:130] >    GitTreeState:   dirty
	I1006 14:21:49.118672  649678 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1006 14:21:49.118678  649678 command_runner.go:130] >    GoVersion:      go1.24.6
	I1006 14:21:49.118683  649678 command_runner.go:130] >    Compiler:       gc
	I1006 14:21:49.118692  649678 command_runner.go:130] >    Platform:       linux/amd64
	I1006 14:21:49.118700  649678 command_runner.go:130] >    Linkmode:       static
	I1006 14:21:49.118708  649678 command_runner.go:130] >    BuildTags:
	I1006 14:21:49.118718  649678 command_runner.go:130] >      static
	I1006 14:21:49.118724  649678 command_runner.go:130] >      netgo
	I1006 14:21:49.118729  649678 command_runner.go:130] >      osusergo
	I1006 14:21:49.118739  649678 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1006 14:21:49.118745  649678 command_runner.go:130] >      seccomp
	I1006 14:21:49.118749  649678 command_runner.go:130] >      apparmor
	I1006 14:21:49.118753  649678 command_runner.go:130] >      selinux
	I1006 14:21:49.118757  649678 command_runner.go:130] >    LDFlags:          unknown
	I1006 14:21:49.118781  649678 command_runner.go:130] >    SeccompEnabled:   true
	I1006 14:21:49.118789  649678 command_runner.go:130] >    AppArmorEnabled:  false
	I1006 14:21:49.118869  649678 ssh_runner.go:195] Run: crio --version
	I1006 14:21:49.147173  649678 command_runner.go:130] > crio version 1.34.1
	I1006 14:21:49.147230  649678 command_runner.go:130] >    GitCommit:      8e14bff4153ba033f12ed3ffa3cadaca5425b313
	I1006 14:21:49.147241  649678 command_runner.go:130] >    GitCommitDate:  2025-10-01T13:04:13Z
	I1006 14:21:49.147249  649678 command_runner.go:130] >    GitTreeState:   dirty
	I1006 14:21:49.147257  649678 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1006 14:21:49.147263  649678 command_runner.go:130] >    GoVersion:      go1.24.6
	I1006 14:21:49.147267  649678 command_runner.go:130] >    Compiler:       gc
	I1006 14:21:49.147283  649678 command_runner.go:130] >    Platform:       linux/amd64
	I1006 14:21:49.147292  649678 command_runner.go:130] >    Linkmode:       static
	I1006 14:21:49.147296  649678 command_runner.go:130] >    BuildTags:
	I1006 14:21:49.147299  649678 command_runner.go:130] >      static
	I1006 14:21:49.147303  649678 command_runner.go:130] >      netgo
	I1006 14:21:49.147309  649678 command_runner.go:130] >      osusergo
	I1006 14:21:49.147313  649678 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1006 14:21:49.147320  649678 command_runner.go:130] >      seccomp
	I1006 14:21:49.147324  649678 command_runner.go:130] >      apparmor
	I1006 14:21:49.147330  649678 command_runner.go:130] >      selinux
	I1006 14:21:49.147334  649678 command_runner.go:130] >    LDFlags:          unknown
	I1006 14:21:49.147340  649678 command_runner.go:130] >    SeccompEnabled:   true
	I1006 14:21:49.147443  649678 command_runner.go:130] >    AppArmorEnabled:  false
	I1006 14:21:49.149760  649678 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1006 14:21:49.150923  649678 cli_runner.go:164] Run: docker network inspect functional-135520 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1006 14:21:49.168305  649678 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1006 14:21:49.172524  649678 command_runner.go:130] > 192.168.49.1	host.minikube.internal
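192.168.49.1 is the gateway of the functional-135520 Docker network shown in the inspect output earlier; minikube maps host.minikube.internal to it inside the node so workloads can reach the host. The same check can be run by hand (a sketch using this run's profile name):

    # Confirm the host alias inside the node
    minikube -p functional-135520 ssh -- grep host.minikube.internal /etc/hosts
    # -> 192.168.49.1	host.minikube.internal
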
	I1006 14:21:49.172624  649678 kubeadm.go:883] updating cluster {Name:functional-135520 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-135520 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1006 14:21:49.172735  649678 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1006 14:21:49.172777  649678 ssh_runner.go:195] Run: sudo crictl images --output json
	I1006 14:21:49.203555  649678 command_runner.go:130] > {
	I1006 14:21:49.203573  649678 command_runner.go:130] >   "images":  [
	I1006 14:21:49.203577  649678 command_runner.go:130] >     {
	I1006 14:21:49.203585  649678 command_runner.go:130] >       "id":  "409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c",
	I1006 14:21:49.203589  649678 command_runner.go:130] >       "repoTags":  [
	I1006 14:21:49.203596  649678 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1006 14:21:49.203599  649678 command_runner.go:130] >       ],
	I1006 14:21:49.203603  649678 command_runner.go:130] >       "repoDigests":  [
	I1006 14:21:49.203613  649678 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1006 14:21:49.203619  649678 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"
	I1006 14:21:49.203623  649678 command_runner.go:130] >       ],
	I1006 14:21:49.203628  649678 command_runner.go:130] >       "size":  "109379124",
	I1006 14:21:49.203634  649678 command_runner.go:130] >       "username":  "",
	I1006 14:21:49.203641  649678 command_runner.go:130] >       "pinned":  false
	I1006 14:21:49.203647  649678 command_runner.go:130] >     },
	I1006 14:21:49.203650  649678 command_runner.go:130] >     {
	I1006 14:21:49.203656  649678 command_runner.go:130] >       "id":  "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1006 14:21:49.203660  649678 command_runner.go:130] >       "repoTags":  [
	I1006 14:21:49.203665  649678 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1006 14:21:49.203671  649678 command_runner.go:130] >       ],
	I1006 14:21:49.203676  649678 command_runner.go:130] >       "repoDigests":  [
	I1006 14:21:49.203684  649678 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1006 14:21:49.203694  649678 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1006 14:21:49.203697  649678 command_runner.go:130] >       ],
	I1006 14:21:49.203701  649678 command_runner.go:130] >       "size":  "31470524",
	I1006 14:21:49.203705  649678 command_runner.go:130] >       "username":  "",
	I1006 14:21:49.203716  649678 command_runner.go:130] >       "pinned":  false
	I1006 14:21:49.203722  649678 command_runner.go:130] >     },
	I1006 14:21:49.203725  649678 command_runner.go:130] >     {
	I1006 14:21:49.203731  649678 command_runner.go:130] >       "id":  "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969",
	I1006 14:21:49.203737  649678 command_runner.go:130] >       "repoTags":  [
	I1006 14:21:49.203742  649678 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.12.1"
	I1006 14:21:49.203748  649678 command_runner.go:130] >       ],
	I1006 14:21:49.203752  649678 command_runner.go:130] >       "repoDigests":  [
	I1006 14:21:49.203759  649678 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998",
	I1006 14:21:49.203768  649678 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"
	I1006 14:21:49.203771  649678 command_runner.go:130] >       ],
	I1006 14:21:49.203775  649678 command_runner.go:130] >       "size":  "76103547",
	I1006 14:21:49.203779  649678 command_runner.go:130] >       "username":  "nonroot",
	I1006 14:21:49.203783  649678 command_runner.go:130] >       "pinned":  false
	I1006 14:21:49.203785  649678 command_runner.go:130] >     },
	I1006 14:21:49.203789  649678 command_runner.go:130] >     {
	I1006 14:21:49.203794  649678 command_runner.go:130] >       "id":  "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115",
	I1006 14:21:49.203799  649678 command_runner.go:130] >       "repoTags":  [
	I1006 14:21:49.203804  649678 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.4-0"
	I1006 14:21:49.203807  649678 command_runner.go:130] >       ],
	I1006 14:21:49.203811  649678 command_runner.go:130] >       "repoDigests":  [
	I1006 14:21:49.203817  649678 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f",
	I1006 14:21:49.203826  649678 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"
	I1006 14:21:49.203829  649678 command_runner.go:130] >       ],
	I1006 14:21:49.203836  649678 command_runner.go:130] >       "size":  "195976448",
	I1006 14:21:49.203840  649678 command_runner.go:130] >       "uid":  {
	I1006 14:21:49.203844  649678 command_runner.go:130] >         "value":  "0"
	I1006 14:21:49.203847  649678 command_runner.go:130] >       },
	I1006 14:21:49.203855  649678 command_runner.go:130] >       "username":  "",
	I1006 14:21:49.203861  649678 command_runner.go:130] >       "pinned":  false
	I1006 14:21:49.203864  649678 command_runner.go:130] >     },
	I1006 14:21:49.203867  649678 command_runner.go:130] >     {
	I1006 14:21:49.203873  649678 command_runner.go:130] >       "id":  "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97",
	I1006 14:21:49.203879  649678 command_runner.go:130] >       "repoTags":  [
	I1006 14:21:49.203884  649678 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.34.1"
	I1006 14:21:49.203887  649678 command_runner.go:130] >       ],
	I1006 14:21:49.203891  649678 command_runner.go:130] >       "repoDigests":  [
	I1006 14:21:49.203901  649678 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964",
	I1006 14:21:49.203907  649678 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"
	I1006 14:21:49.203913  649678 command_runner.go:130] >       ],
	I1006 14:21:49.203916  649678 command_runner.go:130] >       "size":  "89046001",
	I1006 14:21:49.203920  649678 command_runner.go:130] >       "uid":  {
	I1006 14:21:49.203925  649678 command_runner.go:130] >         "value":  "0"
	I1006 14:21:49.203928  649678 command_runner.go:130] >       },
	I1006 14:21:49.203931  649678 command_runner.go:130] >       "username":  "",
	I1006 14:21:49.203935  649678 command_runner.go:130] >       "pinned":  false
	I1006 14:21:49.203938  649678 command_runner.go:130] >     },
	I1006 14:21:49.203941  649678 command_runner.go:130] >     {
	I1006 14:21:49.203947  649678 command_runner.go:130] >       "id":  "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f",
	I1006 14:21:49.203953  649678 command_runner.go:130] >       "repoTags":  [
	I1006 14:21:49.203958  649678 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.34.1"
	I1006 14:21:49.203961  649678 command_runner.go:130] >       ],
	I1006 14:21:49.203965  649678 command_runner.go:130] >       "repoDigests":  [
	I1006 14:21:49.203972  649678 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89",
	I1006 14:21:49.203981  649678 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"
	I1006 14:21:49.203984  649678 command_runner.go:130] >       ],
	I1006 14:21:49.203988  649678 command_runner.go:130] >       "size":  "76004181",
	I1006 14:21:49.203992  649678 command_runner.go:130] >       "uid":  {
	I1006 14:21:49.203998  649678 command_runner.go:130] >         "value":  "0"
	I1006 14:21:49.204001  649678 command_runner.go:130] >       },
	I1006 14:21:49.204005  649678 command_runner.go:130] >       "username":  "",
	I1006 14:21:49.204011  649678 command_runner.go:130] >       "pinned":  false
	I1006 14:21:49.204014  649678 command_runner.go:130] >     },
	I1006 14:21:49.204019  649678 command_runner.go:130] >     {
	I1006 14:21:49.204024  649678 command_runner.go:130] >       "id":  "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7",
	I1006 14:21:49.204028  649678 command_runner.go:130] >       "repoTags":  [
	I1006 14:21:49.204033  649678 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.34.1"
	I1006 14:21:49.204036  649678 command_runner.go:130] >       ],
	I1006 14:21:49.204042  649678 command_runner.go:130] >       "repoDigests":  [
	I1006 14:21:49.204055  649678 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a",
	I1006 14:21:49.204067  649678 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"
	I1006 14:21:49.204073  649678 command_runner.go:130] >       ],
	I1006 14:21:49.204078  649678 command_runner.go:130] >       "size":  "73138073",
	I1006 14:21:49.204081  649678 command_runner.go:130] >       "username":  "",
	I1006 14:21:49.204085  649678 command_runner.go:130] >       "pinned":  false
	I1006 14:21:49.204089  649678 command_runner.go:130] >     },
	I1006 14:21:49.204092  649678 command_runner.go:130] >     {
	I1006 14:21:49.204097  649678 command_runner.go:130] >       "id":  "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813",
	I1006 14:21:49.204104  649678 command_runner.go:130] >       "repoTags":  [
	I1006 14:21:49.204108  649678 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.34.1"
	I1006 14:21:49.204112  649678 command_runner.go:130] >       ],
	I1006 14:21:49.204116  649678 command_runner.go:130] >       "repoDigests":  [
	I1006 14:21:49.204123  649678 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31",
	I1006 14:21:49.204153  649678 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"
	I1006 14:21:49.204160  649678 command_runner.go:130] >       ],
	I1006 14:21:49.204164  649678 command_runner.go:130] >       "size":  "53844823",
	I1006 14:21:49.204167  649678 command_runner.go:130] >       "uid":  {
	I1006 14:21:49.204170  649678 command_runner.go:130] >         "value":  "0"
	I1006 14:21:49.204174  649678 command_runner.go:130] >       },
	I1006 14:21:49.204178  649678 command_runner.go:130] >       "username":  "",
	I1006 14:21:49.204183  649678 command_runner.go:130] >       "pinned":  false
	I1006 14:21:49.204188  649678 command_runner.go:130] >     },
	I1006 14:21:49.204191  649678 command_runner.go:130] >     {
	I1006 14:21:49.204197  649678 command_runner.go:130] >       "id":  "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f",
	I1006 14:21:49.204222  649678 command_runner.go:130] >       "repoTags":  [
	I1006 14:21:49.204230  649678 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1006 14:21:49.204237  649678 command_runner.go:130] >       ],
	I1006 14:21:49.204243  649678 command_runner.go:130] >       "repoDigests":  [
	I1006 14:21:49.204253  649678 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1006 14:21:49.204260  649678 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"
	I1006 14:21:49.204266  649678 command_runner.go:130] >       ],
	I1006 14:21:49.204269  649678 command_runner.go:130] >       "size":  "742092",
	I1006 14:21:49.204273  649678 command_runner.go:130] >       "uid":  {
	I1006 14:21:49.204277  649678 command_runner.go:130] >         "value":  "65535"
	I1006 14:21:49.204280  649678 command_runner.go:130] >       },
	I1006 14:21:49.204284  649678 command_runner.go:130] >       "username":  "",
	I1006 14:21:49.204288  649678 command_runner.go:130] >       "pinned":  true
	I1006 14:21:49.204291  649678 command_runner.go:130] >     }
	I1006 14:21:49.204294  649678 command_runner.go:130] >   ]
	I1006 14:21:49.204299  649678 command_runner.go:130] > }
	I1006 14:21:49.205550  649678 crio.go:514] all images are preloaded for cri-o runtime.
	I1006 14:21:49.205570  649678 crio.go:433] Images already preloaded, skipping extraction
	I1006 14:21:49.205618  649678 ssh_runner.go:195] Run: sudo crictl images --output json
	I1006 14:21:49.229611  649678 command_runner.go:130] > {
	I1006 14:21:49.229630  649678 command_runner.go:130] >   "images":  [
	I1006 14:21:49.229637  649678 command_runner.go:130] >     {
	I1006 14:21:49.229647  649678 command_runner.go:130] >       "id":  "409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c",
	I1006 14:21:49.229656  649678 command_runner.go:130] >       "repoTags":  [
	I1006 14:21:49.229664  649678 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1006 14:21:49.229669  649678 command_runner.go:130] >       ],
	I1006 14:21:49.229675  649678 command_runner.go:130] >       "repoDigests":  [
	I1006 14:21:49.229690  649678 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1006 14:21:49.229706  649678 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"
	I1006 14:21:49.229712  649678 command_runner.go:130] >       ],
	I1006 14:21:49.229738  649678 command_runner.go:130] >       "size":  "109379124",
	I1006 14:21:49.229748  649678 command_runner.go:130] >       "username":  "",
	I1006 14:21:49.229755  649678 command_runner.go:130] >       "pinned":  false
	I1006 14:21:49.229761  649678 command_runner.go:130] >     },
	I1006 14:21:49.229770  649678 command_runner.go:130] >     {
	I1006 14:21:49.229780  649678 command_runner.go:130] >       "id":  "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1006 14:21:49.229789  649678 command_runner.go:130] >       "repoTags":  [
	I1006 14:21:49.229799  649678 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1006 14:21:49.229807  649678 command_runner.go:130] >       ],
	I1006 14:21:49.229814  649678 command_runner.go:130] >       "repoDigests":  [
	I1006 14:21:49.229830  649678 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1006 14:21:49.229846  649678 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1006 14:21:49.229854  649678 command_runner.go:130] >       ],
	I1006 14:21:49.229863  649678 command_runner.go:130] >       "size":  "31470524",
	I1006 14:21:49.229872  649678 command_runner.go:130] >       "username":  "",
	I1006 14:21:49.229894  649678 command_runner.go:130] >       "pinned":  false
	I1006 14:21:49.229902  649678 command_runner.go:130] >     },
	I1006 14:21:49.229907  649678 command_runner.go:130] >     {
	I1006 14:21:49.229918  649678 command_runner.go:130] >       "id":  "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969",
	I1006 14:21:49.229927  649678 command_runner.go:130] >       "repoTags":  [
	I1006 14:21:49.229936  649678 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.12.1"
	I1006 14:21:49.229943  649678 command_runner.go:130] >       ],
	I1006 14:21:49.229951  649678 command_runner.go:130] >       "repoDigests":  [
	I1006 14:21:49.229965  649678 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998",
	I1006 14:21:49.229980  649678 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"
	I1006 14:21:49.229999  649678 command_runner.go:130] >       ],
	I1006 14:21:49.230007  649678 command_runner.go:130] >       "size":  "76103547",
	I1006 14:21:49.230016  649678 command_runner.go:130] >       "username":  "nonroot",
	I1006 14:21:49.230023  649678 command_runner.go:130] >       "pinned":  false
	I1006 14:21:49.230031  649678 command_runner.go:130] >     },
	I1006 14:21:49.230036  649678 command_runner.go:130] >     {
	I1006 14:21:49.230050  649678 command_runner.go:130] >       "id":  "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115",
	I1006 14:21:49.230059  649678 command_runner.go:130] >       "repoTags":  [
	I1006 14:21:49.230068  649678 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.4-0"
	I1006 14:21:49.230076  649678 command_runner.go:130] >       ],
	I1006 14:21:49.230083  649678 command_runner.go:130] >       "repoDigests":  [
	I1006 14:21:49.230097  649678 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f",
	I1006 14:21:49.230112  649678 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"
	I1006 14:21:49.230119  649678 command_runner.go:130] >       ],
	I1006 14:21:49.230127  649678 command_runner.go:130] >       "size":  "195976448",
	I1006 14:21:49.230135  649678 command_runner.go:130] >       "uid":  {
	I1006 14:21:49.230143  649678 command_runner.go:130] >         "value":  "0"
	I1006 14:21:49.230152  649678 command_runner.go:130] >       },
	I1006 14:21:49.230165  649678 command_runner.go:130] >       "username":  "",
	I1006 14:21:49.230175  649678 command_runner.go:130] >       "pinned":  false
	I1006 14:21:49.230181  649678 command_runner.go:130] >     },
	I1006 14:21:49.230189  649678 command_runner.go:130] >     {
	I1006 14:21:49.230220  649678 command_runner.go:130] >       "id":  "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97",
	I1006 14:21:49.230239  649678 command_runner.go:130] >       "repoTags":  [
	I1006 14:21:49.230249  649678 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.34.1"
	I1006 14:21:49.230257  649678 command_runner.go:130] >       ],
	I1006 14:21:49.230264  649678 command_runner.go:130] >       "repoDigests":  [
	I1006 14:21:49.230279  649678 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964",
	I1006 14:21:49.230306  649678 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"
	I1006 14:21:49.230314  649678 command_runner.go:130] >       ],
	I1006 14:21:49.230321  649678 command_runner.go:130] >       "size":  "89046001",
	I1006 14:21:49.230329  649678 command_runner.go:130] >       "uid":  {
	I1006 14:21:49.230336  649678 command_runner.go:130] >         "value":  "0"
	I1006 14:21:49.230345  649678 command_runner.go:130] >       },
	I1006 14:21:49.230352  649678 command_runner.go:130] >       "username":  "",
	I1006 14:21:49.230361  649678 command_runner.go:130] >       "pinned":  false
	I1006 14:21:49.230367  649678 command_runner.go:130] >     },
	I1006 14:21:49.230375  649678 command_runner.go:130] >     {
	I1006 14:21:49.230386  649678 command_runner.go:130] >       "id":  "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f",
	I1006 14:21:49.230395  649678 command_runner.go:130] >       "repoTags":  [
	I1006 14:21:49.230406  649678 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.34.1"
	I1006 14:21:49.230414  649678 command_runner.go:130] >       ],
	I1006 14:21:49.230421  649678 command_runner.go:130] >       "repoDigests":  [
	I1006 14:21:49.230436  649678 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89",
	I1006 14:21:49.230451  649678 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"
	I1006 14:21:49.230460  649678 command_runner.go:130] >       ],
	I1006 14:21:49.230467  649678 command_runner.go:130] >       "size":  "76004181",
	I1006 14:21:49.230484  649678 command_runner.go:130] >       "uid":  {
	I1006 14:21:49.230493  649678 command_runner.go:130] >         "value":  "0"
	I1006 14:21:49.230500  649678 command_runner.go:130] >       },
	I1006 14:21:49.230507  649678 command_runner.go:130] >       "username":  "",
	I1006 14:21:49.230516  649678 command_runner.go:130] >       "pinned":  false
	I1006 14:21:49.230523  649678 command_runner.go:130] >     },
	I1006 14:21:49.230529  649678 command_runner.go:130] >     {
	I1006 14:21:49.230542  649678 command_runner.go:130] >       "id":  "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7",
	I1006 14:21:49.230549  649678 command_runner.go:130] >       "repoTags":  [
	I1006 14:21:49.230568  649678 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.34.1"
	I1006 14:21:49.230576  649678 command_runner.go:130] >       ],
	I1006 14:21:49.230583  649678 command_runner.go:130] >       "repoDigests":  [
	I1006 14:21:49.230599  649678 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a",
	I1006 14:21:49.230614  649678 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"
	I1006 14:21:49.230621  649678 command_runner.go:130] >       ],
	I1006 14:21:49.230628  649678 command_runner.go:130] >       "size":  "73138073",
	I1006 14:21:49.230637  649678 command_runner.go:130] >       "username":  "",
	I1006 14:21:49.230645  649678 command_runner.go:130] >       "pinned":  false
	I1006 14:21:49.230653  649678 command_runner.go:130] >     },
	I1006 14:21:49.230658  649678 command_runner.go:130] >     {
	I1006 14:21:49.230665  649678 command_runner.go:130] >       "id":  "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813",
	I1006 14:21:49.230670  649678 command_runner.go:130] >       "repoTags":  [
	I1006 14:21:49.230679  649678 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.34.1"
	I1006 14:21:49.230687  649678 command_runner.go:130] >       ],
	I1006 14:21:49.230693  649678 command_runner.go:130] >       "repoDigests":  [
	I1006 14:21:49.230706  649678 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31",
	I1006 14:21:49.230734  649678 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"
	I1006 14:21:49.230745  649678 command_runner.go:130] >       ],
	I1006 14:21:49.230751  649678 command_runner.go:130] >       "size":  "53844823",
	I1006 14:21:49.230758  649678 command_runner.go:130] >       "uid":  {
	I1006 14:21:49.230767  649678 command_runner.go:130] >         "value":  "0"
	I1006 14:21:49.230773  649678 command_runner.go:130] >       },
	I1006 14:21:49.230783  649678 command_runner.go:130] >       "username":  "",
	I1006 14:21:49.230791  649678 command_runner.go:130] >       "pinned":  false
	I1006 14:21:49.230799  649678 command_runner.go:130] >     },
	I1006 14:21:49.230805  649678 command_runner.go:130] >     {
	I1006 14:21:49.230819  649678 command_runner.go:130] >       "id":  "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f",
	I1006 14:21:49.230828  649678 command_runner.go:130] >       "repoTags":  [
	I1006 14:21:49.230837  649678 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1006 14:21:49.230845  649678 command_runner.go:130] >       ],
	I1006 14:21:49.230852  649678 command_runner.go:130] >       "repoDigests":  [
	I1006 14:21:49.230865  649678 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1006 14:21:49.230878  649678 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"
	I1006 14:21:49.230887  649678 command_runner.go:130] >       ],
	I1006 14:21:49.230894  649678 command_runner.go:130] >       "size":  "742092",
	I1006 14:21:49.230902  649678 command_runner.go:130] >       "uid":  {
	I1006 14:21:49.230909  649678 command_runner.go:130] >         "value":  "65535"
	I1006 14:21:49.230918  649678 command_runner.go:130] >       },
	I1006 14:21:49.230924  649678 command_runner.go:130] >       "username":  "",
	I1006 14:21:49.230934  649678 command_runner.go:130] >       "pinned":  true
	I1006 14:21:49.230940  649678 command_runner.go:130] >     }
	I1006 14:21:49.230948  649678 command_runner.go:130] >   ]
	I1006 14:21:49.230953  649678 command_runner.go:130] > }
	I1006 14:21:49.231845  649678 crio.go:514] all images are preloaded for cri-o runtime.
	I1006 14:21:49.231866  649678 cache_images.go:85] Images are preloaded, skipping loading
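For triaging runs like this one, the image inventory dumped above can be summarized on the node itself. A minimal sketch, assuming the profile name taken from this log (functional-135520) and that jq is installed on the host; it reads the same JSON shape crictl printed above (.images[].repoTags / .size):

  # Print one line per preloaded image: first tag (falling back to id) and size in bytes.
  minikube -p functional-135520 ssh -- "sudo crictl images --output json" \
    | jq -r '.images[] | (.repoTags[0] // .id) + "  " + .size'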
	I1006 14:21:49.231873  649678 kubeadm.go:934] updating node { 192.168.49.2 8441 v1.34.1 crio true true} ...
	I1006 14:21:49.232021  649678 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-135520 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:functional-135520 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
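The kubelet unit above is what minikube templates into a systemd drop-in inside the node container. One way to verify that the rendered flags actually took effect, assuming the node from this log is still running (systemctl cat prints the unit together with its drop-ins, so no drop-in path needs to be guessed):

  # Show the kubelet unit plus its drop-ins, and the flags of the running process.
  minikube -p functional-135520 ssh -- "sudo systemctl cat kubelet && pgrep -af kubelet"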
	I1006 14:21:49.232106  649678 ssh_runner.go:195] Run: crio config
	I1006 14:21:49.273258  649678 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1006 14:21:49.273298  649678 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1006 14:21:49.273306  649678 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1006 14:21:49.273309  649678 command_runner.go:130] > #
	I1006 14:21:49.273321  649678 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1006 14:21:49.273332  649678 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1006 14:21:49.273343  649678 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1006 14:21:49.273357  649678 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1006 14:21:49.273367  649678 command_runner.go:130] > # reload'.
	I1006 14:21:49.273377  649678 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1006 14:21:49.273389  649678 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1006 14:21:49.273403  649678 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1006 14:21:49.273413  649678 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1006 14:21:49.273423  649678 command_runner.go:130] > [crio]
	I1006 14:21:49.273433  649678 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1006 14:21:49.273446  649678 command_runner.go:130] > # containers images, in this directory.
	I1006 14:21:49.273471  649678 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1006 14:21:49.273486  649678 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1006 14:21:49.273494  649678 command_runner.go:130] > # runroot = "/tmp/storage-run-1000/containers"
	I1006 14:21:49.273512  649678 command_runner.go:130] > # Path to the "imagestore". If set, CRI-O stores all of its images in this directory, separately from Root.
	I1006 14:21:49.273525  649678 command_runner.go:130] > # imagestore = ""
	I1006 14:21:49.273535  649678 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1006 14:21:49.273548  649678 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1006 14:21:49.273561  649678 command_runner.go:130] > # storage_driver = "overlay"
	I1006 14:21:49.273574  649678 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1006 14:21:49.273591  649678 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1006 14:21:49.273599  649678 command_runner.go:130] > # storage_option = [
	I1006 14:21:49.273613  649678 command_runner.go:130] > # ]
	I1006 14:21:49.273623  649678 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1006 14:21:49.273635  649678 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1006 14:21:49.273642  649678 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1006 14:21:49.273652  649678 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1006 14:21:49.273664  649678 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1006 14:21:49.273678  649678 command_runner.go:130] > # always happen on a node reboot
	I1006 14:21:49.273690  649678 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1006 14:21:49.273712  649678 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1006 14:21:49.273725  649678 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1006 14:21:49.273743  649678 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1006 14:21:49.273751  649678 command_runner.go:130] > # version_file_persist = ""
	I1006 14:21:49.273764  649678 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1006 14:21:49.273781  649678 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1006 14:21:49.273792  649678 command_runner.go:130] > # internal_wipe = true
	I1006 14:21:49.273806  649678 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1006 14:21:49.273819  649678 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1006 14:21:49.273829  649678 command_runner.go:130] > # internal_repair = true
	I1006 14:21:49.273842  649678 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1006 14:21:49.273856  649678 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1006 14:21:49.273870  649678 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1006 14:21:49.273880  649678 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1006 14:21:49.273894  649678 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1006 14:21:49.273901  649678 command_runner.go:130] > [crio.api]
	I1006 14:21:49.273915  649678 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1006 14:21:49.273926  649678 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1006 14:21:49.273935  649678 command_runner.go:130] > # IP address on which the stream server will listen.
	I1006 14:21:49.273947  649678 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1006 14:21:49.273963  649678 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1006 14:21:49.273975  649678 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1006 14:21:49.273987  649678 command_runner.go:130] > # stream_port = "0"
	I1006 14:21:49.274002  649678 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1006 14:21:49.274013  649678 command_runner.go:130] > # stream_enable_tls = false
	I1006 14:21:49.274023  649678 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1006 14:21:49.274035  649678 command_runner.go:130] > # stream_idle_timeout = ""
	I1006 14:21:49.274045  649678 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1006 14:21:49.274059  649678 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes.
	I1006 14:21:49.274068  649678 command_runner.go:130] > # stream_tls_cert = ""
	I1006 14:21:49.274083  649678 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1006 14:21:49.274109  649678 command_runner.go:130] > # change and CRI-O will automatically pick up the changes.
	I1006 14:21:49.274132  649678 command_runner.go:130] > # stream_tls_key = ""
	I1006 14:21:49.274143  649678 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1006 14:21:49.274153  649678 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1006 14:21:49.274162  649678 command_runner.go:130] > # automatically pick up the changes.
	I1006 14:21:49.274173  649678 command_runner.go:130] > # stream_tls_ca = ""
	I1006 14:21:49.274218  649678 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1006 14:21:49.274233  649678 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1006 14:21:49.274245  649678 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1006 14:21:49.274257  649678 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1006 14:21:49.274268  649678 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1006 14:21:49.274281  649678 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1006 14:21:49.274293  649678 command_runner.go:130] > [crio.runtime]
	I1006 14:21:49.274303  649678 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1006 14:21:49.274315  649678 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1006 14:21:49.274325  649678 command_runner.go:130] > # "nofile=1024:2048"
	I1006 14:21:49.274336  649678 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1006 14:21:49.274347  649678 command_runner.go:130] > # default_ulimits = [
	I1006 14:21:49.274353  649678 command_runner.go:130] > # ]
	I1006 14:21:49.274363  649678 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1006 14:21:49.274374  649678 command_runner.go:130] > # no_pivot = false
	I1006 14:21:49.274384  649678 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1006 14:21:49.274399  649678 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1006 14:21:49.274410  649678 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1006 14:21:49.274425  649678 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1006 14:21:49.274437  649678 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1006 14:21:49.274453  649678 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1006 14:21:49.274464  649678 command_runner.go:130] > # conmon = ""
	I1006 14:21:49.274473  649678 command_runner.go:130] > # Cgroup setting for conmon
	I1006 14:21:49.274487  649678 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1006 14:21:49.274498  649678 command_runner.go:130] > conmon_cgroup = "pod"
	I1006 14:21:49.274508  649678 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1006 14:21:49.274520  649678 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1006 14:21:49.274533  649678 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1006 14:21:49.274545  649678 command_runner.go:130] > # conmon_env = [
	I1006 14:21:49.274559  649678 command_runner.go:130] > # ]
	I1006 14:21:49.274566  649678 command_runner.go:130] > # Additional environment variables to set for all the
	I1006 14:21:49.274574  649678 command_runner.go:130] > # containers. These are overridden if set in the
	I1006 14:21:49.274583  649678 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1006 14:21:49.274593  649678 command_runner.go:130] > # default_env = [
	I1006 14:21:49.274599  649678 command_runner.go:130] > # ]
	I1006 14:21:49.274610  649678 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1006 14:21:49.274625  649678 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I1006 14:21:49.274633  649678 command_runner.go:130] > # selinux = false
	I1006 14:21:49.274646  649678 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1006 14:21:49.274658  649678 command_runner.go:130] > # for the runtime. If not specified or set to "", then the internal default seccomp profile will be used.
	I1006 14:21:49.274677  649678 command_runner.go:130] > # This option supports live configuration reload.
	I1006 14:21:49.274687  649678 command_runner.go:130] > # seccomp_profile = ""
	I1006 14:21:49.274698  649678 command_runner.go:130] > # Enable a seccomp profile for privileged containers from the local path.
	I1006 14:21:49.274707  649678 command_runner.go:130] > # This option supports live configuration reload.
	I1006 14:21:49.274715  649678 command_runner.go:130] > # privileged_seccomp_profile = ""
	I1006 14:21:49.274733  649678 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1006 14:21:49.274744  649678 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1006 14:21:49.274754  649678 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1006 14:21:49.274768  649678 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1006 14:21:49.274776  649678 command_runner.go:130] > # This option supports live configuration reload.
	I1006 14:21:49.274784  649678 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1006 14:21:49.274794  649678 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1006 14:21:49.274802  649678 command_runner.go:130] > # the cgroup blockio controller.
	I1006 14:21:49.274809  649678 command_runner.go:130] > # blockio_config_file = ""
	I1006 14:21:49.274820  649678 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1006 14:21:49.274828  649678 command_runner.go:130] > # blockio parameters.
	I1006 14:21:49.274840  649678 command_runner.go:130] > # blockio_reload = false
	I1006 14:21:49.274849  649678 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1006 14:21:49.274856  649678 command_runner.go:130] > # irqbalance daemon.
	I1006 14:21:49.274870  649678 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1006 14:21:49.274886  649678 command_runner.go:130] > # irqbalance_config_restore_file allows setting a cpu mask CRI-O should
	I1006 14:21:49.274901  649678 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1006 14:21:49.274915  649678 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1006 14:21:49.274927  649678 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1006 14:21:49.274933  649678 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1006 14:21:49.274941  649678 command_runner.go:130] > # This option supports live configuration reload.
	I1006 14:21:49.274945  649678 command_runner.go:130] > # rdt_config_file = ""
	I1006 14:21:49.274950  649678 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1006 14:21:49.274955  649678 command_runner.go:130] > # cgroup_manager = "systemd"
	I1006 14:21:49.274962  649678 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1006 14:21:49.274968  649678 command_runner.go:130] > # separate_pull_cgroup = ""
	I1006 14:21:49.274974  649678 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1006 14:21:49.274982  649678 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1006 14:21:49.274986  649678 command_runner.go:130] > # will be added.
	I1006 14:21:49.274991  649678 command_runner.go:130] > # default_capabilities = [
	I1006 14:21:49.274994  649678 command_runner.go:130] > # 	"CHOWN",
	I1006 14:21:49.274998  649678 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1006 14:21:49.275001  649678 command_runner.go:130] > # 	"FSETID",
	I1006 14:21:49.275004  649678 command_runner.go:130] > # 	"FOWNER",
	I1006 14:21:49.275008  649678 command_runner.go:130] > # 	"SETGID",
	I1006 14:21:49.275026  649678 command_runner.go:130] > # 	"SETUID",
	I1006 14:21:49.275033  649678 command_runner.go:130] > # 	"SETPCAP",
	I1006 14:21:49.275037  649678 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1006 14:21:49.275040  649678 command_runner.go:130] > # 	"KILL",
	I1006 14:21:49.275043  649678 command_runner.go:130] > # ]
	I1006 14:21:49.275051  649678 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1006 14:21:49.275059  649678 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1006 14:21:49.275064  649678 command_runner.go:130] > # add_inheritable_capabilities = false
	I1006 14:21:49.275071  649678 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1006 14:21:49.275077  649678 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1006 14:21:49.275083  649678 command_runner.go:130] > default_sysctls = [
	I1006 14:21:49.275087  649678 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1006 14:21:49.275090  649678 command_runner.go:130] > ]
	I1006 14:21:49.275096  649678 command_runner.go:130] > # List of devices on the host that a
	I1006 14:21:49.275104  649678 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1006 14:21:49.275109  649678 command_runner.go:130] > # allowed_devices = [
	I1006 14:21:49.275122  649678 command_runner.go:130] > # 	"/dev/fuse",
	I1006 14:21:49.275128  649678 command_runner.go:130] > # 	"/dev/net/tun",
	I1006 14:21:49.275132  649678 command_runner.go:130] > # ]
	I1006 14:21:49.275136  649678 command_runner.go:130] > # List of additional devices, specified as
	I1006 14:21:49.275146  649678 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1006 14:21:49.275151  649678 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1006 14:21:49.275156  649678 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1006 14:21:49.275162  649678 command_runner.go:130] > # additional_devices = [
	I1006 14:21:49.275166  649678 command_runner.go:130] > # ]
	I1006 14:21:49.275170  649678 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1006 14:21:49.275176  649678 command_runner.go:130] > # cdi_spec_dirs = [
	I1006 14:21:49.275180  649678 command_runner.go:130] > # 	"/etc/cdi",
	I1006 14:21:49.275184  649678 command_runner.go:130] > # 	"/var/run/cdi",
	I1006 14:21:49.275189  649678 command_runner.go:130] > # ]
	I1006 14:21:49.275195  649678 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1006 14:21:49.275216  649678 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1006 14:21:49.275225  649678 command_runner.go:130] > # Defaults to false.
	I1006 14:21:49.275239  649678 command_runner.go:130] > # device_ownership_from_security_context = false
	I1006 14:21:49.275249  649678 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1006 14:21:49.275255  649678 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1006 14:21:49.275262  649678 command_runner.go:130] > # hooks_dir = [
	I1006 14:21:49.275267  649678 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1006 14:21:49.275273  649678 command_runner.go:130] > # ]
	I1006 14:21:49.275278  649678 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1006 14:21:49.275284  649678 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1006 14:21:49.275292  649678 command_runner.go:130] > # its default mounts from the following two files:
	I1006 14:21:49.275295  649678 command_runner.go:130] > #
	I1006 14:21:49.275300  649678 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1006 14:21:49.275309  649678 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1006 14:21:49.275315  649678 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1006 14:21:49.275328  649678 command_runner.go:130] > #
	I1006 14:21:49.275338  649678 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1006 14:21:49.275345  649678 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1006 14:21:49.275353  649678 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1006 14:21:49.275358  649678 command_runner.go:130] > #      only add mounts it finds in this file.
	I1006 14:21:49.275364  649678 command_runner.go:130] > #
	I1006 14:21:49.275370  649678 command_runner.go:130] > # default_mounts_file = ""
	I1006 14:21:49.275378  649678 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1006 14:21:49.275385  649678 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1006 14:21:49.275391  649678 command_runner.go:130] > # pids_limit = -1
	I1006 14:21:49.275398  649678 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1006 14:21:49.275406  649678 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1006 14:21:49.275412  649678 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1006 14:21:49.275420  649678 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1006 14:21:49.275426  649678 command_runner.go:130] > # log_size_max = -1
	I1006 14:21:49.275433  649678 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1006 14:21:49.275439  649678 command_runner.go:130] > # log_to_journald = false
	I1006 14:21:49.275445  649678 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1006 14:21:49.275452  649678 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1006 14:21:49.275457  649678 command_runner.go:130] > # Path to directory for container attach sockets.
	I1006 14:21:49.275463  649678 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1006 14:21:49.275467  649678 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1006 14:21:49.275474  649678 command_runner.go:130] > # bind_mount_prefix = ""
	I1006 14:21:49.275479  649678 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1006 14:21:49.275485  649678 command_runner.go:130] > # read_only = false
	I1006 14:21:49.275491  649678 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1006 14:21:49.275497  649678 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1006 14:21:49.275504  649678 command_runner.go:130] > # live configuration reload.
	I1006 14:21:49.275508  649678 command_runner.go:130] > # log_level = "info"
	I1006 14:21:49.275513  649678 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1006 14:21:49.275521  649678 command_runner.go:130] > # This option supports live configuration reload.
	I1006 14:21:49.275525  649678 command_runner.go:130] > # log_filter = ""
	I1006 14:21:49.275530  649678 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1006 14:21:49.275542  649678 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1006 14:21:49.275549  649678 command_runner.go:130] > # separated by comma.
	I1006 14:21:49.275557  649678 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1006 14:21:49.275563  649678 command_runner.go:130] > # uid_mappings = ""
	I1006 14:21:49.275569  649678 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1006 14:21:49.275577  649678 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1006 14:21:49.275585  649678 command_runner.go:130] > # separated by comma.
	I1006 14:21:49.275594  649678 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1006 14:21:49.275598  649678 command_runner.go:130] > # gid_mappings = ""
	I1006 14:21:49.275606  649678 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1006 14:21:49.275614  649678 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1006 14:21:49.275621  649678 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1006 14:21:49.275630  649678 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1006 14:21:49.275634  649678 command_runner.go:130] > # minimum_mappable_uid = -1
	I1006 14:21:49.275640  649678 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1006 14:21:49.275648  649678 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1006 14:21:49.275654  649678 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1006 14:21:49.275664  649678 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1006 14:21:49.275668  649678 command_runner.go:130] > # minimum_mappable_gid = -1
	I1006 14:21:49.275676  649678 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1006 14:21:49.275683  649678 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1006 14:21:49.275690  649678 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1006 14:21:49.275694  649678 command_runner.go:130] > # ctr_stop_timeout = 30
	I1006 14:21:49.275700  649678 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1006 14:21:49.275706  649678 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1006 14:21:49.275711  649678 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1006 14:21:49.275718  649678 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1006 14:21:49.275722  649678 command_runner.go:130] > # drop_infra_ctr = true
	I1006 14:21:49.275731  649678 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1006 14:21:49.275736  649678 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1006 14:21:49.275746  649678 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1006 14:21:49.275752  649678 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1006 14:21:49.275759  649678 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I1006 14:21:49.275772  649678 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1006 14:21:49.275778  649678 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1006 14:21:49.275786  649678 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1006 14:21:49.275790  649678 command_runner.go:130] > # shared_cpuset = ""
	I1006 14:21:49.275800  649678 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1006 14:21:49.275805  649678 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1006 14:21:49.275811  649678 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1006 14:21:49.275817  649678 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1006 14:21:49.275824  649678 command_runner.go:130] > # pinns_path = ""
	I1006 14:21:49.275829  649678 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1006 14:21:49.275838  649678 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1006 14:21:49.275842  649678 command_runner.go:130] > # enable_criu_support = true
	I1006 14:21:49.275849  649678 command_runner.go:130] > # Enable/disable the generation of the container,
	I1006 14:21:49.275855  649678 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1006 14:21:49.275859  649678 command_runner.go:130] > # enable_pod_events = false
	I1006 14:21:49.275865  649678 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1006 14:21:49.275872  649678 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1006 14:21:49.275876  649678 command_runner.go:130] > # default_runtime = "crun"
	I1006 14:21:49.275880  649678 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1006 14:21:49.275887  649678 command_runner.go:130] > # will cause container creation to fail (as opposed to the current behavior, where the path is created as a directory).
	I1006 14:21:49.275898  649678 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1006 14:21:49.275906  649678 command_runner.go:130] > # creation as a file is not desired either.
	I1006 14:21:49.275914  649678 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1006 14:21:49.275921  649678 command_runner.go:130] > # the hostname is being managed dynamically.
	I1006 14:21:49.275925  649678 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1006 14:21:49.275930  649678 command_runner.go:130] > # ]
	I1006 14:21:49.275936  649678 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1006 14:21:49.275945  649678 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1006 14:21:49.275951  649678 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1006 14:21:49.275955  649678 command_runner.go:130] > # Each entry in the table should follow the format:
	I1006 14:21:49.275961  649678 command_runner.go:130] > #
	I1006 14:21:49.275965  649678 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1006 14:21:49.275969  649678 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1006 14:21:49.275980  649678 command_runner.go:130] > # runtime_type = "oci"
	I1006 14:21:49.275988  649678 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1006 14:21:49.275993  649678 command_runner.go:130] > # inherit_default_runtime = false
	I1006 14:21:49.275997  649678 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1006 14:21:49.276002  649678 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1006 14:21:49.276009  649678 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1006 14:21:49.276013  649678 command_runner.go:130] > # monitor_env = []
	I1006 14:21:49.276020  649678 command_runner.go:130] > # privileged_without_host_devices = false
	I1006 14:21:49.276024  649678 command_runner.go:130] > # allowed_annotations = []
	I1006 14:21:49.276029  649678 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1006 14:21:49.276035  649678 command_runner.go:130] > # no_sync_log = false
	I1006 14:21:49.276039  649678 command_runner.go:130] > # default_annotations = {}
	I1006 14:21:49.276044  649678 command_runner.go:130] > # stream_websockets = false
	I1006 14:21:49.276052  649678 command_runner.go:130] > # seccomp_profile = ""
	I1006 14:21:49.276074  649678 command_runner.go:130] > # Where:
	I1006 14:21:49.276087  649678 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1006 14:21:49.276100  649678 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1006 14:21:49.276111  649678 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1006 14:21:49.276124  649678 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1006 14:21:49.276128  649678 command_runner.go:130] > #   in $PATH.
	I1006 14:21:49.276137  649678 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1006 14:21:49.276141  649678 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1006 14:21:49.276149  649678 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1006 14:21:49.276153  649678 command_runner.go:130] > #   state.
	I1006 14:21:49.276159  649678 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1006 14:21:49.276165  649678 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1006 14:21:49.276173  649678 command_runner.go:130] > # - inherit_default_runtime (optional, bool): when true the runtime_path,
	I1006 14:21:49.276179  649678 command_runner.go:130] > #   runtime_type, runtime_root and runtime_config_path will be replaced by
	I1006 14:21:49.276186  649678 command_runner.go:130] > #   the values from the default runtime on load time.
	I1006 14:21:49.276193  649678 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1006 14:21:49.276200  649678 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1006 14:21:49.276242  649678 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1006 14:21:49.276258  649678 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1006 14:21:49.276269  649678 command_runner.go:130] > #   The currently recognized values are:
	I1006 14:21:49.276276  649678 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1006 14:21:49.276286  649678 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1006 14:21:49.276294  649678 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1006 14:21:49.276300  649678 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1006 14:21:49.276308  649678 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1006 14:21:49.276314  649678 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1006 14:21:49.276323  649678 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1006 14:21:49.276330  649678 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1006 14:21:49.276338  649678 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1006 14:21:49.276344  649678 command_runner.go:130] > #   "seccomp-profile.kubernetes.cri-o.io" for setting the seccomp profile for:
	I1006 14:21:49.276353  649678 command_runner.go:130] > #     - a specific container by using: "seccomp-profile.kubernetes.cri-o.io/<CONTAINER_NAME>"
	I1006 14:21:49.276359  649678 command_runner.go:130] > #     - a whole pod by using: "seccomp-profile.kubernetes.cri-o.io/POD"
	I1006 14:21:49.276370  649678 command_runner.go:130] > #     Note that the annotation works on containers as well as on images.
	I1006 14:21:49.276380  649678 command_runner.go:130] > #     For images, the plain annotation "seccomp-profile.kubernetes.cri-o.io"
	I1006 14:21:49.276386  649678 command_runner.go:130] > #     can be used without the required "/POD" suffix or a container name.
	I1006 14:21:49.276396  649678 command_runner.go:130] > #   "io.kubernetes.cri-o.DisableFIPS" for disabling FIPS mode in a Kubernetes pod within a FIPS-enabled cluster.
	I1006 14:21:49.276402  649678 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1006 14:21:49.276409  649678 command_runner.go:130] > #   deprecated option "conmon".
	I1006 14:21:49.276416  649678 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1006 14:21:49.276423  649678 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1006 14:21:49.276429  649678 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1006 14:21:49.276437  649678 command_runner.go:130] > #   should be moved to the container's cgroup
	I1006 14:21:49.276444  649678 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1006 14:21:49.276451  649678 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1006 14:21:49.276459  649678 command_runner.go:130] > #   When using the pod runtime and conmon-rs, the monitor_env can be used to further configure
	I1006 14:21:49.276465  649678 command_runner.go:130] > #   conmon-rs by using:
	I1006 14:21:49.276472  649678 command_runner.go:130] > #     - LOG_DRIVER=[none,systemd,stdout] - Enable logging to the configured target, defaults to none.
	I1006 14:21:49.276481  649678 command_runner.go:130] > #     - HEAPTRACK_OUTPUT_PATH=/path/to/dir - Enable heaptrack profiling and save the files to the set directory.
	I1006 14:21:49.276488  649678 command_runner.go:130] > #     - HEAPTRACK_BINARY_PATH=/path/to/heaptrack - Enable heaptrack profiling and use set heaptrack binary.
	I1006 14:21:49.276494  649678 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1006 14:21:49.276502  649678 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1006 14:21:49.276509  649678 command_runner.go:130] > # - container_min_memory (optional, string): The minimum memory that must be set for a container.
	I1006 14:21:49.276519  649678 command_runner.go:130] > #   This value can be used to override the currently set global value for a specific runtime. If not set,
	I1006 14:21:49.276524  649678 command_runner.go:130] > #   a global default value of "12 MiB" will be used.
	I1006 14:21:49.276534  649678 command_runner.go:130] > # - no_sync_log (optional, bool): If set to true, the runtime will not sync the log file on rotate or container exit.
	I1006 14:21:49.276543  649678 command_runner.go:130] > #   This option is only valid for the 'oci' runtime type. Setting this option to true can cause data loss, e.g.
	I1006 14:21:49.276551  649678 command_runner.go:130] > #   when a machine crash happens.
	I1006 14:21:49.276558  649678 command_runner.go:130] > # - default_annotations (optional, map): Default annotations if not overridden by the pod spec.
	I1006 14:21:49.276568  649678 command_runner.go:130] > # - stream_websockets (optional, bool): Enable the WebSocket protocol for container exec, attach and port forward.
	I1006 14:21:49.276576  649678 command_runner.go:130] > # - seccomp_profile (optional, string): The absolute path of the seccomp.json profile which is used as the default
	I1006 14:21:49.276583  649678 command_runner.go:130] > #   seccomp profile for the runtime.
	I1006 14:21:49.276589  649678 command_runner.go:130] > #   If not specified or set to "", the runtime seccomp_profile will be used.
	I1006 14:21:49.276598  649678 command_runner.go:130] > #   If that is also not specified or set to "", the internal default seccomp profile will be applied.
	I1006 14:21:49.276601  649678 command_runner.go:130] > #
	I1006 14:21:49.276605  649678 command_runner.go:130] > # Using the seccomp notifier feature:
	I1006 14:21:49.276610  649678 command_runner.go:130] > #
	I1006 14:21:49.276617  649678 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1006 14:21:49.276626  649678 command_runner.go:130] > # blocked syscalls (permission denied errors) have a negative impact on the workload.
	I1006 14:21:49.276629  649678 command_runner.go:130] > #
	I1006 14:21:49.276635  649678 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1006 14:21:49.276643  649678 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1006 14:21:49.276646  649678 command_runner.go:130] > #
	I1006 14:21:49.276655  649678 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1006 14:21:49.276664  649678 command_runner.go:130] > # feature.
	I1006 14:21:49.276670  649678 command_runner.go:130] > #
	I1006 14:21:49.276684  649678 command_runner.go:130] > # If everything is set up, CRI-O will modify chosen seccomp profiles for
	I1006 14:21:49.276693  649678 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1006 14:21:49.276700  649678 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1006 14:21:49.276708  649678 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1006 14:21:49.276714  649678 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1006 14:21:49.276720  649678 command_runner.go:130] > #
	I1006 14:21:49.276726  649678 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1006 14:21:49.276734  649678 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1006 14:21:49.276737  649678 command_runner.go:130] > #
	I1006 14:21:49.276745  649678 command_runner.go:130] > # This also means that the Pod's "restartPolicy" has to be set to "Never",
	I1006 14:21:49.276765  649678 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1006 14:21:49.276775  649678 command_runner.go:130] > #
	I1006 14:21:49.276785  649678 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1006 14:21:49.276795  649678 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1006 14:21:49.276798  649678 command_runner.go:130] > # limitation.
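A minimal sketch of wiring the notifier up end to end, assuming a hypothetical drop-in file 99-seccomp-notifier.conf and a throwaway pod named seccomp-debug (both names are illustrative, not from this run):

    # Allow the notifier annotation for the runc handler (hypothetical drop-in).
    sudo tee /etc/crio/crio.conf.d/99-seccomp-notifier.conf <<'EOF'
    [crio.runtime.runtimes.runc]
    allowed_annotations = [
        "io.kubernetes.cri-o.seccompNotifierAction",
    ]
    EOF
    sudo systemctl restart crio

    # The annotation must be present at sandbox creation, and restartPolicy
    # must be Never so the kubelet does not immediately restart the container.
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: seccomp-debug
      annotations:
        io.kubernetes.cri-o.seccompNotifierAction: "stop"
    spec:
      restartPolicy: Never
      containers:
      - name: app
        image: registry.k8s.io/pause:3.10.1
    EOF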
	I1006 14:21:49.276802  649678 command_runner.go:130] > [crio.runtime.runtimes.crun]
	I1006 14:21:49.276807  649678 command_runner.go:130] > runtime_path = "/usr/libexec/crio/crun"
	I1006 14:21:49.276815  649678 command_runner.go:130] > runtime_type = ""
	I1006 14:21:49.276822  649678 command_runner.go:130] > runtime_root = "/run/crun"
	I1006 14:21:49.276833  649678 command_runner.go:130] > inherit_default_runtime = false
	I1006 14:21:49.276841  649678 command_runner.go:130] > runtime_config_path = ""
	I1006 14:21:49.276851  649678 command_runner.go:130] > container_min_memory = ""
	I1006 14:21:49.276860  649678 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1006 14:21:49.276871  649678 command_runner.go:130] > monitor_cgroup = "pod"
	I1006 14:21:49.276877  649678 command_runner.go:130] > monitor_exec_cgroup = ""
	I1006 14:21:49.276883  649678 command_runner.go:130] > allowed_annotations = [
	I1006 14:21:49.276890  649678 command_runner.go:130] > 	"io.containers.trace-syscall",
	I1006 14:21:49.276896  649678 command_runner.go:130] > ]
	I1006 14:21:49.276902  649678 command_runner.go:130] > privileged_without_host_devices = false
	I1006 14:21:49.276909  649678 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1006 14:21:49.276916  649678 command_runner.go:130] > runtime_path = "/usr/libexec/crio/runc"
	I1006 14:21:49.276922  649678 command_runner.go:130] > runtime_type = ""
	I1006 14:21:49.276929  649678 command_runner.go:130] > runtime_root = "/run/runc"
	I1006 14:21:49.276936  649678 command_runner.go:130] > inherit_default_runtime = false
	I1006 14:21:49.276946  649678 command_runner.go:130] > runtime_config_path = ""
	I1006 14:21:49.276954  649678 command_runner.go:130] > container_min_memory = ""
	I1006 14:21:49.276967  649678 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1006 14:21:49.276978  649678 command_runner.go:130] > monitor_cgroup = "pod"
	I1006 14:21:49.276984  649678 command_runner.go:130] > monitor_exec_cgroup = ""
	I1006 14:21:49.276991  649678 command_runner.go:130] > privileged_without_host_devices = false
	I1006 14:21:49.276998  649678 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1006 14:21:49.277005  649678 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1006 14:21:49.277012  649678 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1006 14:21:49.277036  649678 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I1006 14:21:49.277057  649678 command_runner.go:130] > # The currently supported resources are "cpuperiod", "cpuquota", "cpushares", "cpulimit" and "cpuset". The values for "cpuperiod" and "cpuquota" are denoted in microseconds.
	I1006 14:21:49.277077  649678 command_runner.go:130] > # The value for "cpulimit" is denoted in millicores, this value is used to calculate the "cpuquota" with the supplied "cpuperiod" or the default "cpuperiod".
	I1006 14:21:49.277093  649678 command_runner.go:130] > # Note that the "cpulimit" field overrides the "cpuquota" value supplied in this configuration.
	I1006 14:21:49.277104  649678 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1006 14:21:49.277125  649678 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1006 14:21:49.277141  649678 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1006 14:21:49.277151  649678 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1006 14:21:49.277167  649678 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1006 14:21:49.277177  649678 command_runner.go:130] > # Example:
	I1006 14:21:49.277189  649678 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1006 14:21:49.277201  649678 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1006 14:21:49.277225  649678 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1006 14:21:49.277238  649678 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1006 14:21:49.277249  649678 command_runner.go:130] > # cpuset = "0-1"
	I1006 14:21:49.277260  649678 command_runner.go:130] > # cpushares = "5"
	I1006 14:21:49.277270  649678 command_runner.go:130] > # cpuquota = "1000"
	I1006 14:21:49.277281  649678 command_runner.go:130] > # cpuperiod = "100000"
	I1006 14:21:49.277292  649678 command_runner.go:130] > # cpulimit = "35"
	I1006 14:21:49.277300  649678 command_runner.go:130] > # Where:
	I1006 14:21:49.277307  649678 command_runner.go:130] > # The workload name is workload-type.
	I1006 14:21:49.277323  649678 command_runner.go:130] > # To opt in, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1006 14:21:49.277336  649678 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1006 14:21:49.277349  649678 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1006 14:21:49.277366  649678 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1006 14:21:49.277381  649678 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
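Following the general annotation form stated above ($activation_annotation as a key-only annotation, then $annotation_prefix.$resource/$ctrName for per-container overrides), a pod opting into the example workload might look like this sketch (pod and container names are invented):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: pinned-pod
      annotations:
        io.crio/workload: ""                      # activation; value is ignored
        io.crio.workload-type.cpuset/app: "0-1"   # per-container cpuset override
    spec:
      containers:
      - name: app
        image: registry.k8s.io/pause:3.10.1
    EOF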
	I1006 14:21:49.277393  649678 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1006 14:21:49.277406  649678 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1006 14:21:49.277416  649678 command_runner.go:130] > # Default value is set to true
	I1006 14:21:49.277427  649678 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1006 14:21:49.277441  649678 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1006 14:21:49.277453  649678 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1006 14:21:49.277465  649678 command_runner.go:130] > # Default value is set to 'false'
	I1006 14:21:49.277479  649678 command_runner.go:130] > # disable_hostport_mapping = false
	I1006 14:21:49.277492  649678 command_runner.go:130] > # timezone sets the timezone for a container in CRI-O.
	I1006 14:21:49.277513  649678 command_runner.go:130] > # If an empty string is provided, CRI-O retains its default behavior. Use 'Local' to match the timezone of the host machine.
	I1006 14:21:49.277521  649678 command_runner.go:130] > # timezone = ""
	I1006 14:21:49.277531  649678 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1006 14:21:49.277536  649678 command_runner.go:130] > #
	I1006 14:21:49.277547  649678 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1006 14:21:49.277557  649678 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf.
	I1006 14:21:49.277565  649678 command_runner.go:130] > [crio.image]
	I1006 14:21:49.277578  649678 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1006 14:21:49.277589  649678 command_runner.go:130] > # default_transport = "docker://"
	I1006 14:21:49.277603  649678 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1006 14:21:49.277617  649678 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1006 14:21:49.277627  649678 command_runner.go:130] > # global_auth_file = ""
	I1006 14:21:49.277652  649678 command_runner.go:130] > # The image used to instantiate infra containers.
	I1006 14:21:49.277665  649678 command_runner.go:130] > # This option supports live configuration reload.
	I1006 14:21:49.277675  649678 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.10.1"
	I1006 14:21:49.277690  649678 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1006 14:21:49.277704  649678 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1006 14:21:49.277715  649678 command_runner.go:130] > # This option supports live configuration reload.
	I1006 14:21:49.277730  649678 command_runner.go:130] > # pause_image_auth_file = ""
	I1006 14:21:49.277741  649678 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1006 14:21:49.277755  649678 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I1006 14:21:49.277770  649678 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I1006 14:21:49.277785  649678 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1006 14:21:49.277796  649678 command_runner.go:130] > # pause_command = "/pause"
	I1006 14:21:49.277811  649678 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1006 14:21:49.277824  649678 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1006 14:21:49.277838  649678 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1006 14:21:49.277851  649678 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1006 14:21:49.277864  649678 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1006 14:21:49.277879  649678 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1006 14:21:49.277889  649678 command_runner.go:130] > # pinned_images = [
	I1006 14:21:49.277904  649678 command_runner.go:130] > # ]
	I1006 14:21:49.277918  649678 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1006 14:21:49.277929  649678 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1006 14:21:49.277943  649678 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1006 14:21:49.277957  649678 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1006 14:21:49.277969  649678 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1006 14:21:49.277982  649678 command_runner.go:130] > signature_policy = "/etc/crio/policy.json"
	I1006 14:21:49.277994  649678 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1006 14:21:49.278013  649678 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1006 14:21:49.278025  649678 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1006 14:21:49.278042  649678 command_runner.go:130] > # or the concatenated path is non-existent, then the signature_policy or system
	I1006 14:21:49.278056  649678 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1006 14:21:49.278069  649678 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1006 14:21:49.278083  649678 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1006 14:21:49.278099  649678 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1006 14:21:49.278109  649678 command_runner.go:130] > # changing them here.
	I1006 14:21:49.278127  649678 command_runner.go:130] > # This option is deprecated. Use registries.conf file instead.
	I1006 14:21:49.278138  649678 command_runner.go:130] > # insecure_registries = [
	I1006 14:21:49.278148  649678 command_runner.go:130] > # ]
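As the comment notes, insecure_registries is deprecated in favour of containers-registries.conf(5). A sketch of the replacement, using a made-up registry host purely for illustration:

    sudo tee /etc/containers/registries.conf.d/50-insecure.conf <<'EOF'
    [[registry]]
    location = "myregistry.local:5000"
    insecure = true
    EOF
    sudo systemctl restart crio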
	I1006 14:21:49.278163  649678 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1006 14:21:49.278181  649678 command_runner.go:130] > #   ignore; the last will ignore volumes entirely.
	I1006 14:21:49.278192  649678 command_runner.go:130] > # image_volumes = "mkdir"
	I1006 14:21:49.278214  649678 command_runner.go:130] > # Temporary directory to use for storing big files
	I1006 14:21:49.278227  649678 command_runner.go:130] > # big_files_temporary_dir = ""
	I1006 14:21:49.278237  649678 command_runner.go:130] > # If true, CRI-O will automatically reload the mirror registry when
	I1006 14:21:49.278253  649678 command_runner.go:130] > # there is an update to the 'registries.conf.d' directory. Default value is set to 'false'.
	I1006 14:21:49.278265  649678 command_runner.go:130] > # auto_reload_registries = false
	I1006 14:21:49.278278  649678 command_runner.go:130] > # The timeout for an image pull to make progress until the pull operation
	I1006 14:21:49.278294  649678 command_runner.go:130] > # gets canceled. This value will also be used for calculating the pull progress interval as pull_progress_timeout / 10.
	I1006 14:21:49.278307  649678 command_runner.go:130] > # Can be set to 0 to disable the timeout as well as the progress output.
	I1006 14:21:49.278317  649678 command_runner.go:130] > # pull_progress_timeout = "0s"
	I1006 14:21:49.278329  649678 command_runner.go:130] > # The mode of short name resolution.
	I1006 14:21:49.278343  649678 command_runner.go:130] > # The valid values are "enforcing" and "disabled", and the default is "enforcing".
	I1006 14:21:49.278364  649678 command_runner.go:130] > # If "enforcing", an image pull will fail if a short name is used and the results are ambiguous.
	I1006 14:21:49.278377  649678 command_runner.go:130] > # If "disabled", the first result will be chosen.
	I1006 14:21:49.278389  649678 command_runner.go:130] > # short_name_mode = "enforcing"
	I1006 14:21:49.278403  649678 command_runner.go:130] > # OCIArtifactMountSupport is whether CRI-O should support OCI artifacts.
	I1006 14:21:49.278414  649678 command_runner.go:130] > # If set to false, mounting OCI Artifacts will result in an error.
	I1006 14:21:49.278425  649678 command_runner.go:130] > # oci_artifact_mount_support = true
	I1006 14:21:49.278440  649678 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1006 14:21:49.278450  649678 command_runner.go:130] > # CNI plugins.
	I1006 14:21:49.278460  649678 command_runner.go:130] > [crio.network]
	I1006 14:21:49.278474  649678 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1006 14:21:49.278486  649678 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I1006 14:21:49.278497  649678 command_runner.go:130] > # cni_default_network = ""
	I1006 14:21:49.278508  649678 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1006 14:21:49.278519  649678 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1006 14:21:49.278532  649678 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1006 14:21:49.278543  649678 command_runner.go:130] > # plugin_dirs = [
	I1006 14:21:49.278554  649678 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1006 14:21:49.278563  649678 command_runner.go:130] > # ]
	I1006 14:21:49.278574  649678 command_runner.go:130] > # List of included pod metrics.
	I1006 14:21:49.278586  649678 command_runner.go:130] > # included_pod_metrics = [
	I1006 14:21:49.278594  649678 command_runner.go:130] > # ]
	I1006 14:21:49.278605  649678 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1006 14:21:49.278615  649678 command_runner.go:130] > [crio.metrics]
	I1006 14:21:49.278627  649678 command_runner.go:130] > # Globally enable or disable metrics support.
	I1006 14:21:49.278639  649678 command_runner.go:130] > # enable_metrics = false
	I1006 14:21:49.278651  649678 command_runner.go:130] > # Specify enabled metrics collectors.
	I1006 14:21:49.278662  649678 command_runner.go:130] > # By default, all metrics are enabled.
	I1006 14:21:49.278676  649678 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1006 14:21:49.278689  649678 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1006 14:21:49.278700  649678 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1006 14:21:49.278712  649678 command_runner.go:130] > # metrics_collectors = [
	I1006 14:21:49.278718  649678 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1006 14:21:49.278727  649678 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1006 14:21:49.278740  649678 command_runner.go:130] > # 	"containers_oom_total",
	I1006 14:21:49.278747  649678 command_runner.go:130] > # 	"processes_defunct",
	I1006 14:21:49.278754  649678 command_runner.go:130] > # 	"operations_total",
	I1006 14:21:49.278761  649678 command_runner.go:130] > # 	"operations_latency_seconds",
	I1006 14:21:49.278769  649678 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1006 14:21:49.278776  649678 command_runner.go:130] > # 	"operations_errors_total",
	I1006 14:21:49.278786  649678 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1006 14:21:49.278798  649678 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1006 14:21:49.278810  649678 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1006 14:21:49.278822  649678 command_runner.go:130] > # 	"image_pulls_success_total",
	I1006 14:21:49.278833  649678 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1006 14:21:49.278844  649678 command_runner.go:130] > # 	"containers_oom_count_total",
	I1006 14:21:49.278856  649678 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1006 14:21:49.278867  649678 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1006 14:21:49.278878  649678 command_runner.go:130] > # 	"containers_stopped_monitor_count",
	I1006 14:21:49.278886  649678 command_runner.go:130] > # ]
	I1006 14:21:49.278896  649678 command_runner.go:130] > # The IP address or hostname on which the metrics server will listen.
	I1006 14:21:49.278907  649678 command_runner.go:130] > # metrics_host = "127.0.0.1"
	I1006 14:21:49.278916  649678 command_runner.go:130] > # The port on which the metrics server will listen.
	I1006 14:21:49.278927  649678 command_runner.go:130] > # metrics_port = 9090
	I1006 14:21:49.278939  649678 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1006 14:21:49.278950  649678 command_runner.go:130] > # metrics_socket = ""
	I1006 14:21:49.278962  649678 command_runner.go:130] > # The certificate for the secure metrics server.
	I1006 14:21:49.278975  649678 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1006 14:21:49.278986  649678 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1006 14:21:49.278998  649678 command_runner.go:130] > # certificate on any modification event.
	I1006 14:21:49.279009  649678 command_runner.go:130] > # metrics_cert = ""
	I1006 14:21:49.279018  649678 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1006 14:21:49.279031  649678 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1006 14:21:49.279042  649678 command_runner.go:130] > # metrics_key = ""
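If enable_metrics were flipped on, the defaults above (metrics_host 127.0.0.1, metrics_port 9090) imply a plain Prometheus endpoint; a quick smoke test on the node might be:

    # Prints a handful of CRI-O metric samples; assumes metrics were enabled.
    curl -s http://127.0.0.1:9090/metrics | grep -m 5 crio_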
	I1006 14:21:49.279054  649678 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1006 14:21:49.279065  649678 command_runner.go:130] > [crio.tracing]
	I1006 14:21:49.279078  649678 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1006 14:21:49.279088  649678 command_runner.go:130] > # enable_tracing = false
	I1006 14:21:49.279100  649678 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1006 14:21:49.279118  649678 command_runner.go:130] > # tracing_endpoint = "127.0.0.1:4317"
	I1006 14:21:49.279133  649678 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1006 14:21:49.279145  649678 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1006 14:21:49.279155  649678 command_runner.go:130] > # CRI-O NRI configuration.
	I1006 14:21:49.279165  649678 command_runner.go:130] > [crio.nri]
	I1006 14:21:49.279176  649678 command_runner.go:130] > # Globally enable or disable NRI.
	I1006 14:21:49.279185  649678 command_runner.go:130] > # enable_nri = true
	I1006 14:21:49.279195  649678 command_runner.go:130] > # NRI socket to listen on.
	I1006 14:21:49.279220  649678 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1006 14:21:49.279232  649678 command_runner.go:130] > # NRI plugin directory to use.
	I1006 14:21:49.279239  649678 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1006 14:21:49.279251  649678 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1006 14:21:49.279263  649678 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1006 14:21:49.279276  649678 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1006 14:21:49.279348  649678 command_runner.go:130] > # nri_disable_connections = false
	I1006 14:21:49.279363  649678 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1006 14:21:49.279371  649678 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1006 14:21:49.279381  649678 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1006 14:21:49.279393  649678 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1006 14:21:49.279404  649678 command_runner.go:130] > # NRI default validator configuration.
	I1006 14:21:49.279420  649678 command_runner.go:130] > # If enabled, the builtin default validator can be used to reject a container if some
	I1006 14:21:49.279434  649678 command_runner.go:130] > # NRI plugin requested a restricted adjustment. Currently the following adjustments
	I1006 14:21:49.279445  649678 command_runner.go:130] > # can be restricted/rejected:
	I1006 14:21:49.279455  649678 command_runner.go:130] > # - OCI hook injection
	I1006 14:21:49.279467  649678 command_runner.go:130] > # - adjustment of runtime default seccomp profile
	I1006 14:21:49.279479  649678 command_runner.go:130] > # - adjustment of unconfined seccomp profile
	I1006 14:21:49.279488  649678 command_runner.go:130] > # - adjustment of a custom seccomp profile
	I1006 14:21:49.279499  649678 command_runner.go:130] > # - adjustment of linux namespaces
	I1006 14:21:49.279513  649678 command_runner.go:130] > # Additionally, the default validator can be used to reject container creation if any
	I1006 14:21:49.279528  649678 command_runner.go:130] > # of a required set of plugins has not processed a container creation request, unless
	I1006 14:21:49.279541  649678 command_runner.go:130] > # the container has been annotated to tolerate a missing plugin.
	I1006 14:21:49.279550  649678 command_runner.go:130] > #
	I1006 14:21:49.279561  649678 command_runner.go:130] > # [crio.nri.default_validator]
	I1006 14:21:49.279574  649678 command_runner.go:130] > # nri_enable_default_validator = false
	I1006 14:21:49.279587  649678 command_runner.go:130] > # nri_validator_reject_oci_hook_adjustment = false
	I1006 14:21:49.279600  649678 command_runner.go:130] > # nri_validator_reject_runtime_default_seccomp_adjustment = false
	I1006 14:21:49.279613  649678 command_runner.go:130] > # nri_validator_reject_unconfined_seccomp_adjustment = false
	I1006 14:21:49.279626  649678 command_runner.go:130] > # nri_validator_reject_custom_seccomp_adjustment = false
	I1006 14:21:49.279636  649678 command_runner.go:130] > # nri_validator_reject_namespace_adjustment = false
	I1006 14:21:49.279646  649678 command_runner.go:130] > # nri_validator_required_plugins = [
	I1006 14:21:49.279656  649678 command_runner.go:130] > # ]
	I1006 14:21:49.279668  649678 command_runner.go:130] > # nri_validator_tolerate_missing_plugins_annotation = ""
	I1006 14:21:49.279681  649678 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1006 14:21:49.279691  649678 command_runner.go:130] > [crio.stats]
	I1006 14:21:49.279704  649678 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1006 14:21:49.279717  649678 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1006 14:21:49.279728  649678 command_runner.go:130] > # stats_collection_period = 0
	I1006 14:21:49.279739  649678 command_runner.go:130] > # The number of seconds between collecting pod/container stats and pod
	I1006 14:21:49.279753  649678 command_runner.go:130] > # sandbox metrics. If set to 0, the metrics/stats are collected on-demand instead.
	I1006 14:21:49.279764  649678 command_runner.go:130] > # collection_period = 0
	I1006 14:21:49.279811  649678 command_runner.go:130] ! time="2025-10-06T14:21:49.258239123Z" level=info msg="Updating config from single file: /etc/crio/crio.conf"
	I1006 14:21:49.279828  649678 command_runner.go:130] ! time="2025-10-06T14:21:49.258265766Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf"
	I1006 14:21:49.279842  649678 command_runner.go:130] ! time="2025-10-06T14:21:49.258283938Z" level=info msg="Skipping not-existing config file \"/etc/crio/crio.conf\""
	I1006 14:21:49.279857  649678 command_runner.go:130] ! time="2025-10-06T14:21:49.25830256Z" level=info msg="Updating config from path: /etc/crio/crio.conf.d"
	I1006 14:21:49.279875  649678 command_runner.go:130] ! time="2025-10-06T14:21:49.258357499Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:21:49.279892  649678 command_runner.go:130] ! time="2025-10-06T14:21:49.258517334Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/10-crio.conf"
	I1006 14:21:49.279912  649678 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
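The "Updating config from drop-in file" lines above show CRI-O merging /etc/crio/crio.conf.d in lexical order, so later files win for any key defined twice. A hedged example of overriding a single key with a trailing drop-in (file name and value are arbitrary):

    sudo tee /etc/crio/crio.conf.d/99-pause-image.conf <<'EOF'
    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10.1"
    EOF
    sudo systemctl restart crio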
	I1006 14:21:49.280045  649678 cni.go:84] Creating CNI manager for ""
	I1006 14:21:49.280059  649678 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1006 14:21:49.280078  649678 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1006 14:21:49.280122  649678 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-135520 NodeName:functional-135520 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1006 14:21:49.280303  649678 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-135520"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
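One way to sanity-check a generated manifest like the one above is kubeadm's own validator; a sketch, assuming it is run on the node against the staged file (see the scp of kubeadm.yaml.new below):

    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
        --config /var/tmp/minikube/kubeadm.yaml.new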
	I1006 14:21:49.280384  649678 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1006 14:21:49.288800  649678 command_runner.go:130] > kubeadm
	I1006 14:21:49.288826  649678 command_runner.go:130] > kubectl
	I1006 14:21:49.288833  649678 command_runner.go:130] > kubelet
	I1006 14:21:49.288864  649678 binaries.go:44] Found k8s binaries, skipping transfer
	I1006 14:21:49.288912  649678 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1006 14:21:49.296476  649678 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1006 14:21:49.308883  649678 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1006 14:21:49.321172  649678 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
	I1006 14:21:49.333376  649678 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1006 14:21:49.336963  649678 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1006 14:21:49.337019  649678 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 14:21:49.424422  649678 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1006 14:21:49.437476  649678 certs.go:69] Setting up /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520 for IP: 192.168.49.2
	I1006 14:21:49.437505  649678 certs.go:195] generating shared ca certs ...
	I1006 14:21:49.437527  649678 certs.go:227] acquiring lock for ca certs: {Name:mka0cc25cb6a953e937aa825fc55167759271aaf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:21:49.437678  649678 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21701-626179/.minikube/ca.key
	I1006 14:21:49.437730  649678 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21701-626179/.minikube/proxy-client-ca.key
	I1006 14:21:49.437748  649678 certs.go:257] generating profile certs ...
	I1006 14:21:49.437847  649678 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/client.key
	I1006 14:21:49.437896  649678 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/apiserver.key.72a46e8e
	I1006 14:21:49.437936  649678 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/proxy-client.key
	I1006 14:21:49.437949  649678 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1006 14:21:49.437963  649678 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1006 14:21:49.437984  649678 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1006 14:21:49.438003  649678 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1006 14:21:49.438018  649678 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1006 14:21:49.438035  649678 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1006 14:21:49.438049  649678 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1006 14:21:49.438064  649678 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1006 14:21:49.438123  649678 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/629719.pem (1338 bytes)
	W1006 14:21:49.438160  649678 certs.go:480] ignoring /home/jenkins/minikube-integration/21701-626179/.minikube/certs/629719_empty.pem, impossibly tiny 0 bytes
	I1006 14:21:49.438171  649678 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca-key.pem (1675 bytes)
	I1006 14:21:49.438196  649678 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem (1082 bytes)
	I1006 14:21:49.438246  649678 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/cert.pem (1123 bytes)
	I1006 14:21:49.438271  649678 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/key.pem (1679 bytes)
	I1006 14:21:49.438316  649678 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem (1708 bytes)
	I1006 14:21:49.438344  649678 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem -> /usr/share/ca-certificates/6297192.pem
	I1006 14:21:49.438359  649678 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1006 14:21:49.438381  649678 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/629719.pem -> /usr/share/ca-certificates/629719.pem
	I1006 14:21:49.439032  649678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1006 14:21:49.456437  649678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1006 14:21:49.473578  649678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1006 14:21:49.490593  649678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1006 14:21:49.508347  649678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1006 14:21:49.525339  649678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1006 14:21:49.541997  649678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1006 14:21:49.558467  649678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1006 14:21:49.576359  649678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem --> /usr/share/ca-certificates/6297192.pem (1708 bytes)
	I1006 14:21:49.593578  649678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1006 14:21:49.610863  649678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/certs/629719.pem --> /usr/share/ca-certificates/629719.pem (1338 bytes)
	I1006 14:21:49.628123  649678 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1006 14:21:49.640270  649678 ssh_runner.go:195] Run: openssl version
	I1006 14:21:49.646279  649678 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1006 14:21:49.646391  649678 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6297192.pem && ln -fs /usr/share/ca-certificates/6297192.pem /etc/ssl/certs/6297192.pem"
	I1006 14:21:49.654553  649678 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6297192.pem
	I1006 14:21:49.658110  649678 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct  6 14:13 /usr/share/ca-certificates/6297192.pem
	I1006 14:21:49.658254  649678 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  6 14:13 /usr/share/ca-certificates/6297192.pem
	I1006 14:21:49.658303  649678 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6297192.pem
	I1006 14:21:49.692318  649678 command_runner.go:130] > 3ec20f2e
	I1006 14:21:49.692406  649678 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6297192.pem /etc/ssl/certs/3ec20f2e.0"
	I1006 14:21:49.700814  649678 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1006 14:21:49.709140  649678 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1006 14:21:49.712721  649678 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct  6 13:56 /usr/share/ca-certificates/minikubeCA.pem
	I1006 14:21:49.712738  649678 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  6 13:56 /usr/share/ca-certificates/minikubeCA.pem
	I1006 14:21:49.712772  649678 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1006 14:21:49.745663  649678 command_runner.go:130] > b5213941
	I1006 14:21:49.745998  649678 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1006 14:21:49.754083  649678 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/629719.pem && ln -fs /usr/share/ca-certificates/629719.pem /etc/ssl/certs/629719.pem"
	I1006 14:21:49.762664  649678 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/629719.pem
	I1006 14:21:49.766415  649678 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct  6 14:13 /usr/share/ca-certificates/629719.pem
	I1006 14:21:49.766461  649678 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  6 14:13 /usr/share/ca-certificates/629719.pem
	I1006 14:21:49.766502  649678 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/629719.pem
	I1006 14:21:49.800644  649678 command_runner.go:130] > 51391683
	I1006 14:21:49.800985  649678 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/629719.pem /etc/ssl/certs/51391683.0"
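The three hash-and-symlink rounds above implement the standard OpenSSL CA directory layout: each trusted cert is reachable as <subject_hash>.0 inside /etc/ssl/certs. The same steps by hand, with cert.pem as a placeholder name:

    # OpenSSL locates CAs by subject-hash symlinks; .0 is the first (usually
    # only) certificate with that hash.
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/cert.pem)
    sudo ln -fs /etc/ssl/certs/cert.pem "/etc/ssl/certs/${hash}.0"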
	I1006 14:21:49.809049  649678 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1006 14:21:49.812721  649678 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1006 14:21:49.812776  649678 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1006 14:21:49.812784  649678 command_runner.go:130] > Device: 8,1	Inode: 580300      Links: 1
	I1006 14:21:49.812793  649678 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1006 14:21:49.812800  649678 command_runner.go:130] > Access: 2025-10-06 14:17:42.533320203 +0000
	I1006 14:21:49.812811  649678 command_runner.go:130] > Modify: 2025-10-06 14:13:37.457627952 +0000
	I1006 14:21:49.812819  649678 command_runner.go:130] > Change: 2025-10-06 14:13:37.457627952 +0000
	I1006 14:21:49.812829  649678 command_runner.go:130] >  Birth: 2025-10-06 14:13:37.457627952 +0000
	I1006 14:21:49.812886  649678 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1006 14:21:49.846896  649678 command_runner.go:130] > Certificate will not expire
	I1006 14:21:49.847277  649678 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1006 14:21:49.881096  649678 command_runner.go:130] > Certificate will not expire
	I1006 14:21:49.881431  649678 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1006 14:21:49.916333  649678 command_runner.go:130] > Certificate will not expire
	I1006 14:21:49.916837  649678 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1006 14:21:49.951128  649678 command_runner.go:130] > Certificate will not expire
	I1006 14:21:49.951323  649678 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1006 14:21:49.984919  649678 command_runner.go:130] > Certificate will not expire
	I1006 14:21:49.985255  649678 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1006 14:21:50.018710  649678 command_runner.go:130] > Certificate will not expire
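Each "Certificate will not expire" line is the success path of openssl's -checkend flag: exit status 0 when the cert is still valid after the given number of seconds (86400 here, i.e. 24 hours), 1 otherwise. The same check over a 30-day horizon:

    openssl x509 -noout -checkend $((30*24*3600)) \
        -in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
        && echo "ok for 30 days" || echo "renew soon"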
	I1006 14:21:50.018987  649678 kubeadm.go:400] StartCluster: {Name:functional-135520 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-135520 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 14:21:50.019061  649678 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1006 14:21:50.019118  649678 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1006 14:21:50.047552  649678 cri.go:89] found id: ""
	I1006 14:21:50.047624  649678 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1006 14:21:50.055103  649678 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1006 14:21:50.055125  649678 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1006 14:21:50.055137  649678 command_runner.go:130] > /var/lib/minikube/etcd:
	I1006 14:21:50.055780  649678 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1006 14:21:50.055795  649678 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1006 14:21:50.055835  649678 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1006 14:21:50.063106  649678 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1006 14:21:50.063218  649678 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-135520" does not appear in /home/jenkins/minikube-integration/21701-626179/kubeconfig
	I1006 14:21:50.063263  649678 kubeconfig.go:62] /home/jenkins/minikube-integration/21701-626179/kubeconfig needs updating (will repair): [kubeconfig missing "functional-135520" cluster setting kubeconfig missing "functional-135520" context setting]
	I1006 14:21:50.063581  649678 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-626179/kubeconfig: {Name:mke84a74c9d22714f21826744ac414fa621492d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:21:50.064282  649678 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21701-626179/kubeconfig
	I1006 14:21:50.064435  649678 kapi.go:59] client config for functional-135520: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/client.crt", KeyFile:"/home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/client.key", CAFile:"/home/jenkins/minikube-integration/21701-626179/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1006 14:21:50.064874  649678 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1006 14:21:50.064894  649678 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1006 14:21:50.064898  649678 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1006 14:21:50.064902  649678 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1006 14:21:50.064906  649678 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1006 14:21:50.064950  649678 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1006 14:21:50.065393  649678 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1006 14:21:50.072886  649678 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1006 14:21:50.072922  649678 kubeadm.go:601] duration metric: took 17.120794ms to restartPrimaryControlPlane
	I1006 14:21:50.072932  649678 kubeadm.go:402] duration metric: took 53.951913ms to StartCluster
	I1006 14:21:50.072948  649678 settings.go:142] acquiring lock: {Name:mk49b10f71f24d1f54d5c453b3b04e717e9a9100 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:21:50.073763  649678 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21701-626179/kubeconfig
	I1006 14:21:50.074346  649678 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-626179/kubeconfig: {Name:mke84a74c9d22714f21826744ac414fa621492d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:21:50.074579  649678 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1006 14:21:50.074661  649678 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
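[Editor's note] Only two entries in that toEnable map are true (default-storageclass and storage-provisioner), which is why only those two addons appear in the lines that follow. A trivial filter over such a map, purely for illustration:

```go
package main

import (
	"fmt"
	"sort"
)

// enabledAddons returns the names flagged true, sorted for stable output.
func enabledAddons(toEnable map[string]bool) []string {
	var names []string
	for name, on := range toEnable {
		if on {
			names = append(names, name)
		}
	}
	sort.Strings(names)
	return names
}

func main() {
	fmt.Println(enabledAddons(map[string]bool{
		"default-storageclass": true,
		"storage-provisioner":  true,
		"ingress":              false,
	}))
	// Output: [default-storageclass storage-provisioner]
}
```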
	I1006 14:21:50.074799  649678 addons.go:69] Setting storage-provisioner=true in profile "functional-135520"
	I1006 14:21:50.074825  649678 addons.go:238] Setting addon storage-provisioner=true in "functional-135520"
	I1006 14:21:50.074761  649678 config.go:182] Loaded profile config "functional-135520": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 14:21:50.074866  649678 addons.go:69] Setting default-storageclass=true in profile "functional-135520"
	I1006 14:21:50.074859  649678 host.go:66] Checking if "functional-135520" exists ...
	I1006 14:21:50.074881  649678 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-135520"
	I1006 14:21:50.075174  649678 cli_runner.go:164] Run: docker container inspect functional-135520 --format={{.State.Status}}
	I1006 14:21:50.075488  649678 cli_runner.go:164] Run: docker container inspect functional-135520 --format={{.State.Status}}
	I1006 14:21:50.077233  649678 out.go:179] * Verifying Kubernetes components...
	I1006 14:21:50.078370  649678 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 14:21:50.095495  649678 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21701-626179/kubeconfig
	I1006 14:21:50.095656  649678 kapi.go:59] client config for functional-135520: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/client.crt", KeyFile:"/home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/client.key", CAFile:"/home/jenkins/minikube-integration/21701-626179/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1006 14:21:50.095938  649678 addons.go:238] Setting addon default-storageclass=true in "functional-135520"
	I1006 14:21:50.095974  649678 host.go:66] Checking if "functional-135520" exists ...
	I1006 14:21:50.096327  649678 cli_runner.go:164] Run: docker container inspect functional-135520 --format={{.State.Status}}
	I1006 14:21:50.100068  649678 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1006 14:21:50.101767  649678 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 14:21:50.101786  649678 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1006 14:21:50.101831  649678 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-135520
	I1006 14:21:50.122986  649678 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1006 14:21:50.123017  649678 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1006 14:21:50.123083  649678 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-135520
	I1006 14:21:50.128190  649678 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32878 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/functional-135520/id_rsa Username:docker}
	I1006 14:21:50.141305  649678 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32878 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/functional-135520/id_rsa Username:docker}
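[Editor's note] sshutil's "scp memory --> …" above means the manifest bytes are streamed from the test process directly to the remote path over the SSH connection just opened (127.0.0.1:32878, key path from the log). One plausible way to do that with golang.org/x/crypto/ssh, piping stdin through `sudo tee`; this is a sketch, not minikube's actual sshutil implementation, and the manifest payload here is a stand-in:

```go
package main

import (
	"bytes"
	"os"

	"golang.org/x/crypto/ssh"
)

// pushFile streams in-memory data to dst on the remote host by piping
// it through `sudo tee`, roughly what "scp memory --> dst" describes.
func pushFile(addr, keyPath string, data []byte, dst string) error {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return err
	}
	client, err := ssh.Dial("tcp", addr, &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM
	})
	if err != nil {
		return err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return err
	}
	defer sess.Close()
	sess.Stdin = bytes.NewReader(data)
	return sess.Run("sudo tee " + dst + " >/dev/null")
}

func main() {
	// Stand-in payload; the real storage-provisioner.yaml is 2676 bytes.
	manifest := []byte("# manifest bytes would go here\n")
	err := pushFile("127.0.0.1:32878",
		"/home/jenkins/minikube-integration/21701-626179/.minikube/machines/functional-135520/id_rsa",
		manifest, "/etc/kubernetes/addons/storage-provisioner.yaml")
	if err != nil {
		panic(err)
	}
}
```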
	I1006 14:21:50.171892  649678 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1006 14:21:50.185683  649678 node_ready.go:35] waiting up to 6m0s for node "functional-135520" to be "Ready" ...
	I1006 14:21:50.185842  649678 type.go:168] "Request Body" body=""
	I1006 14:21:50.185906  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:21:50.186211  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
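[Editor's note] A Response line with status="" and milliseconds=0 means the request never reached the apiserver: the TCP dial failed before any HTTP exchange took place. The same round trip can be reproduced with nothing but the standard library, using the certificate paths and Accept header from the log:

```go
package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"net/http"
	"os"
)

func main() {
	base := "/home/jenkins/minikube-integration/21701-626179/.minikube"
	cert, err := tls.LoadX509KeyPair(
		base+"/profiles/functional-135520/client.crt",
		base+"/profiles/functional-135520/client.key")
	if err != nil {
		panic(err)
	}
	caPEM, err := os.ReadFile(base + "/ca.crt")
	if err != nil {
		panic(err)
	}
	pool := x509.NewCertPool()
	pool.AppendCertsFromPEM(caPEM)

	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{
			Certificates: []tls.Certificate{cert},
			RootCAs:      pool,
		},
	}}
	req, err := http.NewRequest("GET",
		"https://192.168.49.2:8441/api/v1/nodes/functional-135520", nil)
	if err != nil {
		panic(err)
	}
	// Same Accept header the round tripper logs above.
	req.Header.Set("Accept", "application/vnd.kubernetes.protobuf,application/json")
	resp, err := client.Do(req)
	if err != nil {
		// This is the status=""/milliseconds=0 case: the dial failed
		// before any HTTP response existed.
		fmt.Println(err)
		return
	}
	defer resp.Body.Close()
	fmt.Println(resp.Status)
}
```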
	I1006 14:21:50.238569  649678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 14:21:50.250369  649678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1006 14:21:50.297302  649678 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:21:50.297371  649678 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:50.297421  649678 retry.go:31] will retry after 341.445316ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
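[Editor's note] The delays chosen at retry.go:31 (341ms and 289ms here, climbing toward multiple seconds further down) look like jittered exponential backoff; the jitter is what keeps the two parallel applies (storage-provisioner and storageclass) from retrying in lockstep. A generic stand-in for that pattern, not minikube's actual retry.go:

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryExpo retries fn with exponential backoff plus jitter until it
// succeeds or the total time budget is spent.
func retryExpo(fn func() error, base, budget time.Duration) error {
	deadline := time.Now().Add(budget)
	delay := base
	for {
		err := fn()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("gave up: %w", err)
		}
		// Sleep somewhere in [delay/2, 1.5*delay) so parallel
		// retry loops drift apart instead of synchronizing.
		sleep := delay/2 + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: %v\n", sleep, err)
		time.Sleep(sleep)
		delay *= 2
	}
}

func main() {
	attempts := 0
	_ = retryExpo(func() error {
		attempts++
		if attempts < 4 {
			return errors.New("connect: connection refused")
		}
		return nil
	}, 300*time.Millisecond, time.Minute)
}
```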
	I1006 14:21:50.306094  649678 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:21:50.306137  649678 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:50.306156  649678 retry.go:31] will retry after 289.440052ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:50.596773  649678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1006 14:21:50.639555  649678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 14:21:50.652478  649678 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:21:50.652547  649678 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:50.652572  649678 retry.go:31] will retry after 276.474886ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:50.686728  649678 type.go:168] "Request Body" body=""
	I1006 14:21:50.686820  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:21:50.687192  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:21:50.696244  649678 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:21:50.696297  649678 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:50.696320  649678 retry.go:31] will retry after 208.115159ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:50.904724  649678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 14:21:50.929427  649678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1006 14:21:50.961651  649678 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:21:50.961718  649678 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:50.961741  649678 retry.go:31] will retry after 526.763649ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:50.984274  649678 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:21:50.988765  649678 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:50.988799  649678 retry.go:31] will retry after 299.40846ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:51.186119  649678 type.go:168] "Request Body" body=""
	I1006 14:21:51.186232  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:21:51.186600  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:21:51.288897  649678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1006 14:21:51.344296  649678 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:21:51.344362  649678 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:51.344390  649678 retry.go:31] will retry after 1.255489073s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:51.489635  649678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 14:21:51.542509  649678 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:21:51.545518  649678 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:51.545558  649678 retry.go:31] will retry after 1.109395122s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:51.686960  649678 type.go:168] "Request Body" body=""
	I1006 14:21:51.687044  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:21:51.687429  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:21:52.186098  649678 type.go:168] "Request Body" body=""
	I1006 14:21:52.186177  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:21:52.186579  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:21:52.186647  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
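[Editor's note] From here the log settles into the polling loop started at node_ready.go:35: a GET roughly every 500ms, each failing with connection refused, until the apiserver comes back or the 6m budget runs out. The shape of that loop, sketched with client-go (Ready-condition check simplified; TLS config omitted for brevity):

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// waitNodeReady polls the node until its Ready condition is True or
// the timeout elapses, mirroring the 500ms cadence visible above.
func waitNodeReady(cs kubernetes.Interface, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.Background(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		// Errors such as "connection refused" during the apiserver
		// restart are tolerated and simply polled through.
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("node %q not Ready within %v", name, timeout)
}

func main() {
	cfg := &rest.Config{Host: "https://192.168.49.2:8441"} // certs omitted
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Println(waitNodeReady(cs, "functional-135520", 6*time.Minute))
}
```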
	I1006 14:21:52.600133  649678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1006 14:21:52.654438  649678 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:21:52.654496  649678 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:52.654515  649678 retry.go:31] will retry after 1.609702337s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:52.655551  649678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 14:21:52.686897  649678 type.go:168] "Request Body" body=""
	I1006 14:21:52.686998  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:21:52.687382  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:21:52.709517  649678 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:21:52.709578  649678 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:52.709602  649678 retry.go:31] will retry after 1.712984533s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:53.186162  649678 type.go:168] "Request Body" body=""
	I1006 14:21:53.186283  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:21:53.186685  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:21:53.686305  649678 type.go:168] "Request Body" body=""
	I1006 14:21:53.686410  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:21:53.686778  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:21:54.186389  649678 type.go:168] "Request Body" body=""
	I1006 14:21:54.186497  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:21:54.186895  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:21:54.186974  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:21:54.265161  649678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1006 14:21:54.320415  649678 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:21:54.320465  649678 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:54.320484  649678 retry.go:31] will retry after 1.901708606s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:54.423753  649678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 14:21:54.478522  649678 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:21:54.478584  649678 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:54.478619  649678 retry.go:31] will retry after 1.584586857s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:54.685879  649678 type.go:168] "Request Body" body=""
	I1006 14:21:54.685954  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:21:54.686309  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:21:55.185880  649678 type.go:168] "Request Body" body=""
	I1006 14:21:55.185961  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:21:55.186309  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:21:55.685969  649678 type.go:168] "Request Body" body=""
	I1006 14:21:55.686071  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:21:55.686478  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:21:56.063981  649678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 14:21:56.118717  649678 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:21:56.118774  649678 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:56.118807  649678 retry.go:31] will retry after 2.733091815s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:56.185931  649678 type.go:168] "Request Body" body=""
	I1006 14:21:56.186008  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:21:56.186344  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:21:56.222525  649678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1006 14:21:56.276120  649678 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:21:56.276196  649678 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:56.276235  649678 retry.go:31] will retry after 1.816128137s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:56.686920  649678 type.go:168] "Request Body" body=""
	I1006 14:21:56.687009  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:21:56.687408  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:21:56.687471  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:21:57.186225  649678 type.go:168] "Request Body" body=""
	I1006 14:21:57.186314  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:21:57.186655  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:21:57.686516  649678 type.go:168] "Request Body" body=""
	I1006 14:21:57.686601  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:21:57.686915  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:21:58.093526  649678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1006 14:21:58.148989  649678 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:21:58.149041  649678 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:58.149066  649678 retry.go:31] will retry after 2.492749577s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:58.186253  649678 type.go:168] "Request Body" body=""
	I1006 14:21:58.186345  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:21:58.186702  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:21:58.686540  649678 type.go:168] "Request Body" body=""
	I1006 14:21:58.686625  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:21:58.686963  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:21:58.852333  649678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 14:21:58.907770  649678 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:21:58.907811  649678 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:58.907831  649678 retry.go:31] will retry after 3.408188619s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:59.186242  649678 type.go:168] "Request Body" body=""
	I1006 14:21:59.186325  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:21:59.186705  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:21:59.186784  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:21:59.686631  649678 type.go:168] "Request Body" body=""
	I1006 14:21:59.686729  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:21:59.687112  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:00.185903  649678 type.go:168] "Request Body" body=""
	I1006 14:22:00.185998  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:00.186365  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:00.642984  649678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1006 14:22:00.686799  649678 type.go:168] "Request Body" body=""
	I1006 14:22:00.686880  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:00.687243  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:00.698375  649678 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:22:00.698427  649678 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:22:00.698448  649678 retry.go:31] will retry after 6.594317937s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:22:01.186036  649678 type.go:168] "Request Body" body=""
	I1006 14:22:01.186143  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:01.186563  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:01.686476  649678 type.go:168] "Request Body" body=""
	I1006 14:22:01.686584  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:01.686981  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:22:01.687058  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:22:02.186608  649678 type.go:168] "Request Body" body=""
	I1006 14:22:02.186705  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:02.187061  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:02.316279  649678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 14:22:02.370200  649678 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:22:02.373358  649678 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:22:02.373390  649678 retry.go:31] will retry after 5.569612861s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:22:02.686858  649678 type.go:168] "Request Body" body=""
	I1006 14:22:02.686947  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:02.687350  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:03.185954  649678 type.go:168] "Request Body" body=""
	I1006 14:22:03.186035  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:03.186451  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:03.686069  649678 type.go:168] "Request Body" body=""
	I1006 14:22:03.686185  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:03.686679  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:04.186146  649678 type.go:168] "Request Body" body=""
	I1006 14:22:04.186265  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:04.186682  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:22:04.186759  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:22:04.686312  649678 type.go:168] "Request Body" body=""
	I1006 14:22:04.686448  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:04.686778  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:05.186355  649678 type.go:168] "Request Body" body=""
	I1006 14:22:05.186442  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:05.186804  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:05.686470  649678 type.go:168] "Request Body" body=""
	I1006 14:22:05.686548  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:05.686892  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:06.186409  649678 type.go:168] "Request Body" body=""
	I1006 14:22:06.186493  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:06.186841  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:22:06.186906  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:22:06.686653  649678 type.go:168] "Request Body" body=""
	I1006 14:22:06.686731  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:06.687077  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:07.186430  649678 type.go:168] "Request Body" body=""
	I1006 14:22:07.186515  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:07.186850  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:07.293062  649678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1006 14:22:07.347879  649678 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:22:07.347938  649678 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:22:07.347958  649678 retry.go:31] will retry after 11.599769479s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
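(Editor's note: the `retry.go:31] will retry after ...` lines above show minikube's addon applier shelling out to kubectl and backing off on failure; the apply fails because kubectl cannot download the OpenAPI schema for validation while the apiserver is down. Below is a minimal sketch of that retry-with-jittered-backoff pattern, assuming a hypothetical helper `applyWithRetry`; the wait calculation only loosely mirrors the 7s-48s delays seen in this log and is not minikube's actual retry.go.)

package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

// applyWithRetry runs kubectl apply and, on failure, sleeps a jittered
// interval that grows with the attempt number before retrying.
// (Hypothetical helper for illustration.)
func applyWithRetry(manifest string, maxAttempts int) error {
	var err error
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		cmd := exec.Command("kubectl", "apply", "--force", "-f", manifest)
		if err = cmd.Run(); err == nil {
			return nil // the apiserver answered and the manifest applied
		}
		// Jittered, attempt-scaled wait in the same ballpark as the log above.
		wait := time.Duration(5+rand.Intn(15)) * time.Second * time.Duration(attempt)
		fmt.Printf("apply failed, will retry after %v: %v\n", wait, err)
		time.Sleep(wait)
	}
	return fmt.Errorf("giving up after %d attempts: %w", maxAttempts, err)
}

func main() {
	if err := applyWithRetry("/etc/kubernetes/addons/storageclass.yaml", 5); err != nil {
		fmt.Println(err)
	}
}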
	I1006 14:22:07.686422  649678 type.go:168] "Request Body" body=""
	I1006 14:22:07.686519  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:07.686919  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:07.943325  649678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 14:22:07.994639  649678 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:22:07.997627  649678 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:22:07.997659  649678 retry.go:31] will retry after 6.982471195s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:22:08.186017  649678 type.go:168] "Request Body" body=""
	I1006 14:22:08.186095  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:08.186523  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:08.686113  649678 type.go:168] "Request Body" body=""
	I1006 14:22:08.686234  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:08.686617  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:22:08.686693  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:22:09.186236  649678 type.go:168] "Request Body" body=""
	I1006 14:22:09.186345  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:09.186717  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:09.686283  649678 type.go:168] "Request Body" body=""
	I1006 14:22:09.686365  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:09.686759  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:10.186558  649678 type.go:168] "Request Body" body=""
	I1006 14:22:10.186657  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:10.187046  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:10.686665  649678 type.go:168] "Request Body" body=""
	I1006 14:22:10.686743  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:10.687116  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:22:10.687244  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:22:11.186799  649678 type.go:168] "Request Body" body=""
	I1006 14:22:11.186892  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:11.187296  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:11.686074  649678 type.go:168] "Request Body" body=""
	I1006 14:22:11.686224  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:11.686586  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:12.186151  649678 type.go:168] "Request Body" body=""
	I1006 14:22:12.186305  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:12.186696  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:12.686260  649678 type.go:168] "Request Body" body=""
	I1006 14:22:12.686345  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:12.686706  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:13.186307  649678 type.go:168] "Request Body" body=""
	I1006 14:22:13.186418  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:13.186788  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:22:13.186857  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:22:13.686381  649678 type.go:168] "Request Body" body=""
	I1006 14:22:13.686488  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:13.686854  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:14.186497  649678 type.go:168] "Request Body" body=""
	I1006 14:22:14.186592  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:14.186941  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:14.686598  649678 type.go:168] "Request Body" body=""
	I1006 14:22:14.686682  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:14.687029  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:14.980397  649678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 14:22:15.034191  649678 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:22:15.034263  649678 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:22:15.034288  649678 retry.go:31] will retry after 12.004605903s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:22:15.186550  649678 type.go:168] "Request Body" body=""
	I1006 14:22:15.186633  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:15.187020  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:22:15.187102  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:22:15.686717  649678 type.go:168] "Request Body" body=""
	I1006 14:22:15.686812  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:15.687196  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:16.186809  649678 type.go:168] "Request Body" body=""
	I1006 14:22:16.186884  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:16.187256  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:16.686013  649678 type.go:168] "Request Body" body=""
	I1006 14:22:16.686098  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:16.686488  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:17.186068  649678 type.go:168] "Request Body" body=""
	I1006 14:22:17.186146  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:17.186573  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:17.686133  649678 type.go:168] "Request Body" body=""
	I1006 14:22:17.686253  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:17.686622  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:22:17.686699  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
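(Editor's note: the poll loop above, from node_ready.go, is repeatedly fetching the Node object to check its Ready condition. Conceptually that check looks like the sketch below, using the standard client-go API; `nodeIsReady` is a hypothetical name and this is not minikube's actual node_ready.go.)

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeIsReady fetches the Node and reports whether its Ready condition is True.
func nodeIsReady(clientset *kubernetes.Clientset, name string) (bool, error) {
	node, err := clientset.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err // e.g. "connection refused" while the apiserver is down
	}
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	ready, err := nodeIsReady(clientset, "functional-135520")
	fmt.Println(ready, err)
}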
	I1006 14:22:18.186192  649678 type.go:168] "Request Body" body=""
	I1006 14:22:18.186295  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:18.186693  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:18.686281  649678 type.go:168] "Request Body" body=""
	I1006 14:22:18.686358  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:18.686685  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:18.948057  649678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1006 14:22:19.002723  649678 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:22:19.002770  649678 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:22:19.002791  649678 retry.go:31] will retry after 9.663618433s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:22:19.186105  649678 type.go:168] "Request Body" body=""
	I1006 14:22:19.186250  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:19.186659  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:19.686518  649678 type.go:168] "Request Body" body=""
	I1006 14:22:19.686605  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:19.686939  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:22:19.687009  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:22:20.186860  649678 type.go:168] "Request Body" body=""
	I1006 14:22:20.186965  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:20.187367  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:20.686167  649678 type.go:168] "Request Body" body=""
	I1006 14:22:20.686275  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:20.686635  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:21.186460  649678 type.go:168] "Request Body" body=""
	I1006 14:22:21.186548  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:21.186942  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:21.686821  649678 type.go:168] "Request Body" body=""
	I1006 14:22:21.686902  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:21.687332  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:22:21.687397  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:22:22.186083  649678 type.go:168] "Request Body" body=""
	I1006 14:22:22.186166  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:22.186569  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:22.686397  649678 type.go:168] "Request Body" body=""
	I1006 14:22:22.686491  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:22.686903  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:23.186781  649678 type.go:168] "Request Body" body=""
	I1006 14:22:23.186870  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:23.187268  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:23.686042  649678 type.go:168] "Request Body" body=""
	I1006 14:22:23.686129  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:23.686575  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:24.186356  649678 type.go:168] "Request Body" body=""
	I1006 14:22:24.186489  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:24.186921  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:22:24.187013  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:22:24.686802  649678 type.go:168] "Request Body" body=""
	I1006 14:22:24.686904  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:24.687313  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:25.186100  649678 type.go:168] "Request Body" body=""
	I1006 14:22:25.186254  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:25.186644  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:25.686394  649678 type.go:168] "Request Body" body=""
	I1006 14:22:25.686478  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:25.686854  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:26.186709  649678 type.go:168] "Request Body" body=""
	I1006 14:22:26.186843  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:26.187291  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:22:26.187357  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:22:26.686108  649678 type.go:168] "Request Body" body=""
	I1006 14:22:26.686232  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:26.686608  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:27.039059  649678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 14:22:27.094007  649678 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:22:27.097496  649678 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:22:27.097534  649678 retry.go:31] will retry after 22.614868096s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:22:27.186847  649678 type.go:168] "Request Body" body=""
	I1006 14:22:27.186925  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:27.187319  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:27.686152  649678 type.go:168] "Request Body" body=""
	I1006 14:22:27.686302  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:27.686651  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:28.186562  649678 type.go:168] "Request Body" body=""
	I1006 14:22:28.186655  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:28.187109  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:28.666677  649678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1006 14:22:28.686315  649678 type.go:168] "Request Body" body=""
	I1006 14:22:28.686424  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:28.686765  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:22:28.686846  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:22:28.722750  649678 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:22:28.722794  649678 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:22:28.722814  649678 retry.go:31] will retry after 11.553901016s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
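(Editor's note: every failure above is "connection refused" on port 8441, both via 192.168.49.2 and localhost. Refusal, as opposed to a timeout, means the host is reachable but nothing is listening, i.e. kube-apiserver itself is down; the `--validate=false` suggestion in the error text would not rescue the apply, since the subsequent POST would fail on the same dial. A quick probe of that endpoint, for illustration:)

package main

import (
	"fmt"
	"net"
	"time"
)

// Dial the apiserver endpoint the log keeps retrying; an immediate
// "connection refused" confirms the listener is absent rather than slow.
func main() {
	conn, err := net.DialTimeout("tcp", "192.168.49.2:8441", 2*time.Second)
	if err != nil {
		fmt.Println("probe failed:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port is accepting connections")
}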
	I1006 14:22:29.186360  649678 type.go:168] "Request Body" body=""
	I1006 14:22:29.186463  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:29.186854  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:29.686594  649678 type.go:168] "Request Body" body=""
	I1006 14:22:29.686674  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:29.687059  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:30.186847  649678 type.go:168] "Request Body" body=""
	I1006 14:22:30.186978  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:30.187394  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:30.685980  649678 type.go:168] "Request Body" body=""
	I1006 14:22:30.686063  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:30.686514  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:31.186103  649678 type.go:168] "Request Body" body=""
	I1006 14:22:31.186273  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:31.186671  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:22:31.186735  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:22:31.686585  649678 type.go:168] "Request Body" body=""
	I1006 14:22:31.686699  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:31.687091  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:32.186757  649678 type.go:168] "Request Body" body=""
	I1006 14:22:32.186864  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:32.187311  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:32.685887  649678 type.go:168] "Request Body" body=""
	I1006 14:22:32.685973  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:32.686388  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:33.186057  649678 type.go:168] "Request Body" body=""
	I1006 14:22:33.186156  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:33.186557  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:33.686144  649678 type.go:168] "Request Body" body=""
	I1006 14:22:33.686262  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:33.686648  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:22:33.686721  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:22:34.186259  649678 type.go:168] "Request Body" body=""
	I1006 14:22:34.186354  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:34.186737  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:34.686419  649678 type.go:168] "Request Body" body=""
	I1006 14:22:34.686498  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:34.686871  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:35.186497  649678 type.go:168] "Request Body" body=""
	I1006 14:22:35.186603  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:35.186980  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:35.686662  649678 type.go:168] "Request Body" body=""
	I1006 14:22:35.686763  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:35.687122  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:22:35.687197  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:22:36.186754  649678 type.go:168] "Request Body" body=""
	I1006 14:22:36.186848  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:36.187316  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:36.686164  649678 type.go:168] "Request Body" body=""
	I1006 14:22:36.686314  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:36.686722  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:37.186321  649678 type.go:168] "Request Body" body=""
	I1006 14:22:37.186420  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:37.186775  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:37.686633  649678 type.go:168] "Request Body" body=""
	I1006 14:22:37.686715  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:37.687101  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:38.185900  649678 type.go:168] "Request Body" body=""
	I1006 14:22:38.185994  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:38.186391  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:22:38.186465  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:22:38.686198  649678 type.go:168] "Request Body" body=""
	I1006 14:22:38.686309  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:38.686708  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:39.186526  649678 type.go:168] "Request Body" body=""
	I1006 14:22:39.186655  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:39.187049  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:39.685917  649678 type.go:168] "Request Body" body=""
	I1006 14:22:39.686005  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:39.686446  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:40.186230  649678 type.go:168] "Request Body" body=""
	I1006 14:22:40.186337  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:40.186733  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:22:40.186801  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:22:40.276916  649678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1006 14:22:40.331801  649678 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:22:40.335179  649678 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:22:40.335232  649678 retry.go:31] will retry after 39.41387573s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:22:40.686763  649678 type.go:168] "Request Body" body=""
	I1006 14:22:40.686899  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:40.687303  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:41.186091  649678 type.go:168] "Request Body" body=""
	I1006 14:22:41.186200  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:41.186603  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:41.686526  649678 type.go:168] "Request Body" body=""
	I1006 14:22:41.686626  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:41.687010  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:42.186887  649678 type.go:168] "Request Body" body=""
	I1006 14:22:42.186964  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:42.187345  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:22:42.187421  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:22:42.686150  649678 type.go:168] "Request Body" body=""
	I1006 14:22:42.686267  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:42.686658  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:43.186527  649678 type.go:168] "Request Body" body=""
	I1006 14:22:43.186614  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:43.186999  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:43.686820  649678 type.go:168] "Request Body" body=""
	I1006 14:22:43.686909  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:43.687318  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:44.186096  649678 type.go:168] "Request Body" body=""
	I1006 14:22:44.186247  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:44.186640  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:44.686530  649678 type.go:168] "Request Body" body=""
	I1006 14:22:44.686615  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:44.687010  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:22:44.687087  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:22:45.186889  649678 type.go:168] "Request Body" body=""
	I1006 14:22:45.186975  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:45.187340  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:45.686094  649678 type.go:168] "Request Body" body=""
	I1006 14:22:45.686177  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:45.686579  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:46.186357  649678 type.go:168] "Request Body" body=""
	I1006 14:22:46.186468  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:46.186826  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:46.686734  649678 type.go:168] "Request Body" body=""
	I1006 14:22:46.686824  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:46.687252  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:22:46.687331  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:22:47.186069  649678 type.go:168] "Request Body" body=""
	I1006 14:22:47.186155  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:47.186586  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:47.686023  649678 type.go:168] "Request Body" body=""
	I1006 14:22:47.686126  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:47.686582  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:48.186406  649678 type.go:168] "Request Body" body=""
	I1006 14:22:48.186501  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:48.186908  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:48.686766  649678 type.go:168] "Request Body" body=""
	I1006 14:22:48.686850  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:48.687229  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:49.186033  649678 type.go:168] "Request Body" body=""
	I1006 14:22:49.186123  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:49.186550  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:22:49.186623  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:22:49.686385  649678 type.go:168] "Request Body" body=""
	I1006 14:22:49.686504  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:49.686900  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:49.713160  649678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 14:22:49.766183  649678 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:22:49.769572  649678 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:22:49.769611  649678 retry.go:31] will retry after 48.442133458s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... the GET https://192.168.49.2:8441/api/v1/nodes/functional-135520 poll repeats every ~500ms from 14:22:50 through 14:23:19, each attempt failing with "dial tcp 192.168.49.2:8441: connect: connection refused"; node_ready.go:55 logs a will-retry warning roughly every 2.5s ...]
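
A minimal sketch of the readiness wait loop summarized above, built on client-go; waitNodeReady is a hypothetical helper and minikube's node_ready.go differs in detail.

// Sketch of a node-readiness wait loop: poll GET /api/v1/nodes/<name>
// until the Ready condition is True, retrying on transient errors such
// as the connection-refused failures in this log.
package nodewait

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func waitNodeReady(cs kubernetes.Interface, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		} else {
			fmt.Printf("will retry: %v\n", err) // apiserver still down: connection refused
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
	}
	return fmt.Errorf("node %q did not become Ready within %v", name, timeout)
}
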
	I1006 14:23:19.749802  649678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1006 14:23:19.804037  649678 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:23:19.807440  649678 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:23:19.807591  649678 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
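
Every request in this stretch fails at the TCP layer before kubectl's validation can even download the OpenAPI schema. A quick standalone probe of the refused endpoint, assuming Go and network reachability to the node (not part of the test suite):

// Quick TCP probe of the apiserver endpoint the log shows refusing
// connections; a standalone diagnostic sketch.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "192.168.49.2:8441", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver unreachable:", err) // e.g. connect: connection refused
		return
	}
	conn.Close()
	fmt.Println("apiserver port open")
}
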
	[... polling continues unchanged from 14:23:20 through 14:23:38, every request refused with the same connection-refused error and a node_ready.go:55 will-retry warning roughly every 2.5s ...]
	I1006 14:23:38.212898  649678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 14:23:38.268129  649678 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:23:38.271217  649678 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:23:38.271448  649678 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1006 14:23:38.274179  649678 out.go:179] * Enabled addons: 
	I1006 14:23:38.275265  649678 addons.go:514] duration metric: took 1m48.200610857s for enable addons: enabled=[]
	I1006 14:23:38.686820  649678 type.go:168] "Request Body" body=""
	I1006 14:23:38.686904  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:23:38.687336  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:23:39.186242  649678 type.go:168] "Request Body" body=""
	I1006 14:23:39.186340  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:23:39.186728  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:23:39.686616  649678 type.go:168] "Request Body" body=""
	I1006 14:23:39.686713  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:23:39.687110  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:23:40.185923  649678 type.go:168] "Request Body" body=""
	I1006 14:23:40.186012  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:23:40.186440  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:23:40.686260  649678 type.go:168] "Request Body" body=""
	I1006 14:23:40.686360  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:23:40.686781  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:23:40.686870  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	[... repeated log output elided: the same GET https://192.168.49.2:8441/api/v1/nodes/functional-135520 poll recurs every ~500ms with an empty response, and the node_ready.go:55 "connection refused" warning recurs roughly every 2s, from 14:23:41 through 14:24:40 ...]
	I1006 14:24:41.186000  649678 type.go:168] "Request Body" body=""
	I1006 14:24:41.186080  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:41.186497  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:41.686311  649678 type.go:168] "Request Body" body=""
	I1006 14:24:41.686398  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:41.686747  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:42.186394  649678 type.go:168] "Request Body" body=""
	I1006 14:24:42.186477  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:42.186829  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:24:42.186909  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:24:42.686365  649678 type.go:168] "Request Body" body=""
	I1006 14:24:42.686458  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:42.686828  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:43.186364  649678 type.go:168] "Request Body" body=""
	I1006 14:24:43.186453  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:43.186835  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:43.686404  649678 type.go:168] "Request Body" body=""
	I1006 14:24:43.686479  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:43.686829  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:44.186419  649678 type.go:168] "Request Body" body=""
	I1006 14:24:44.186497  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:44.186840  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:44.686503  649678 type.go:168] "Request Body" body=""
	I1006 14:24:44.686579  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:44.686908  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:24:44.686976  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:24:45.186546  649678 type.go:168] "Request Body" body=""
	I1006 14:24:45.186633  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:45.186973  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:45.686633  649678 type.go:168] "Request Body" body=""
	I1006 14:24:45.686722  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:45.687066  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:46.186715  649678 type.go:168] "Request Body" body=""
	I1006 14:24:46.186798  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:46.187164  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:46.686921  649678 type.go:168] "Request Body" body=""
	I1006 14:24:46.687008  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:46.687441  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:24:46.687511  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:24:47.186093  649678 type.go:168] "Request Body" body=""
	I1006 14:24:47.186175  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:47.186548  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:47.686128  649678 type.go:168] "Request Body" body=""
	I1006 14:24:47.686233  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:47.686613  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:48.186260  649678 type.go:168] "Request Body" body=""
	I1006 14:24:48.186345  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:48.186715  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:48.686317  649678 type.go:168] "Request Body" body=""
	I1006 14:24:48.686416  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:48.686787  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:49.186383  649678 type.go:168] "Request Body" body=""
	I1006 14:24:49.186483  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:49.186862  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:24:49.186934  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:24:49.686547  649678 type.go:168] "Request Body" body=""
	I1006 14:24:49.686630  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:49.687018  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:50.186932  649678 type.go:168] "Request Body" body=""
	I1006 14:24:50.187020  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:50.187392  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:50.685995  649678 type.go:168] "Request Body" body=""
	I1006 14:24:50.686087  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:50.686639  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:51.186241  649678 type.go:168] "Request Body" body=""
	I1006 14:24:51.186321  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:51.186677  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:51.686524  649678 type.go:168] "Request Body" body=""
	I1006 14:24:51.686604  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:51.686971  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:24:51.687045  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:24:52.186636  649678 type.go:168] "Request Body" body=""
	I1006 14:24:52.186724  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:52.187108  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:52.686753  649678 type.go:168] "Request Body" body=""
	I1006 14:24:52.686831  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:52.687267  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:53.185896  649678 type.go:168] "Request Body" body=""
	I1006 14:24:53.185979  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:53.186366  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:53.685914  649678 type.go:168] "Request Body" body=""
	I1006 14:24:53.685990  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:53.686334  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:54.185922  649678 type.go:168] "Request Body" body=""
	I1006 14:24:54.186002  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:54.186408  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:24:54.186489  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:24:54.685967  649678 type.go:168] "Request Body" body=""
	I1006 14:24:54.686051  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:54.686451  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:55.186040  649678 type.go:168] "Request Body" body=""
	I1006 14:24:55.186122  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:55.186477  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:55.686036  649678 type.go:168] "Request Body" body=""
	I1006 14:24:55.686113  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:55.686480  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:56.186026  649678 type.go:168] "Request Body" body=""
	I1006 14:24:56.186104  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:56.186478  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:24:56.186550  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:24:56.686248  649678 type.go:168] "Request Body" body=""
	I1006 14:24:56.686329  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:56.686693  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:57.186234  649678 type.go:168] "Request Body" body=""
	I1006 14:24:57.186315  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:57.186630  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:57.686283  649678 type.go:168] "Request Body" body=""
	I1006 14:24:57.686402  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:57.686814  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:58.186365  649678 type.go:168] "Request Body" body=""
	I1006 14:24:58.186450  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:58.186794  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:24:58.186858  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:24:58.686485  649678 type.go:168] "Request Body" body=""
	I1006 14:24:58.686625  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:58.687000  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:59.186645  649678 type.go:168] "Request Body" body=""
	I1006 14:24:59.186728  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:59.187067  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:59.686701  649678 type.go:168] "Request Body" body=""
	I1006 14:24:59.686778  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:59.687158  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:00.185971  649678 type.go:168] "Request Body" body=""
	I1006 14:25:00.186051  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:00.186405  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:00.686037  649678 type.go:168] "Request Body" body=""
	I1006 14:25:00.686117  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:00.686528  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:25:00.686606  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:25:01.186098  649678 type.go:168] "Request Body" body=""
	I1006 14:25:01.186186  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:01.186639  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:01.686574  649678 type.go:168] "Request Body" body=""
	I1006 14:25:01.686664  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:01.687059  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:02.186731  649678 type.go:168] "Request Body" body=""
	I1006 14:25:02.186819  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:02.187259  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:02.685880  649678 type.go:168] "Request Body" body=""
	I1006 14:25:02.685972  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:02.686460  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:03.186037  649678 type.go:168] "Request Body" body=""
	I1006 14:25:03.186117  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:03.186526  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:25:03.186595  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:25:03.686186  649678 type.go:168] "Request Body" body=""
	I1006 14:25:03.686282  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:03.686638  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:04.186251  649678 type.go:168] "Request Body" body=""
	I1006 14:25:04.186325  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:04.186672  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:04.686261  649678 type.go:168] "Request Body" body=""
	I1006 14:25:04.686346  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:04.686697  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:05.186293  649678 type.go:168] "Request Body" body=""
	I1006 14:25:05.186374  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:05.186780  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:25:05.186857  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:25:05.686332  649678 type.go:168] "Request Body" body=""
	I1006 14:25:05.686416  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:05.686772  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:06.186370  649678 type.go:168] "Request Body" body=""
	I1006 14:25:06.186449  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:06.186819  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:06.686670  649678 type.go:168] "Request Body" body=""
	I1006 14:25:06.686749  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:06.687114  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:07.186765  649678 type.go:168] "Request Body" body=""
	I1006 14:25:07.186854  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:07.187255  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:25:07.187328  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:25:07.686866  649678 type.go:168] "Request Body" body=""
	I1006 14:25:07.686945  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:07.687337  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:08.185991  649678 type.go:168] "Request Body" body=""
	I1006 14:25:08.186073  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:08.186473  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:08.686026  649678 type.go:168] "Request Body" body=""
	I1006 14:25:08.686101  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:08.686467  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:09.186027  649678 type.go:168] "Request Body" body=""
	I1006 14:25:09.186117  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:09.186491  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:09.686131  649678 type.go:168] "Request Body" body=""
	I1006 14:25:09.686218  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:09.686554  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:25:09.686624  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:25:10.186421  649678 type.go:168] "Request Body" body=""
	I1006 14:25:10.186509  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:10.186885  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:10.686589  649678 type.go:168] "Request Body" body=""
	I1006 14:25:10.686673  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:10.687059  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:11.186451  649678 type.go:168] "Request Body" body=""
	I1006 14:25:11.186534  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:11.186908  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:11.686874  649678 type.go:168] "Request Body" body=""
	I1006 14:25:11.686958  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:11.687404  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:25:11.687478  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:25:12.186004  649678 type.go:168] "Request Body" body=""
	I1006 14:25:12.186089  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:12.186488  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:12.686071  649678 type.go:168] "Request Body" body=""
	I1006 14:25:12.686175  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:12.686583  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:13.186311  649678 type.go:168] "Request Body" body=""
	I1006 14:25:13.186394  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:13.186794  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:13.686469  649678 type.go:168] "Request Body" body=""
	I1006 14:25:13.686560  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:13.686955  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:14.186674  649678 type.go:168] "Request Body" body=""
	I1006 14:25:14.186764  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:14.187198  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:25:14.187305  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:25:14.686830  649678 type.go:168] "Request Body" body=""
	I1006 14:25:14.686915  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:14.687318  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:15.185883  649678 type.go:168] "Request Body" body=""
	I1006 14:25:15.185963  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:15.186381  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:15.685988  649678 type.go:168] "Request Body" body=""
	I1006 14:25:15.686075  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:15.686471  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:16.186057  649678 type.go:168] "Request Body" body=""
	I1006 14:25:16.186159  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:16.186628  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:16.686506  649678 type.go:168] "Request Body" body=""
	I1006 14:25:16.686586  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:16.686922  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:25:16.686991  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:25:17.186686  649678 type.go:168] "Request Body" body=""
	I1006 14:25:17.186779  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:17.187190  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:17.686871  649678 type.go:168] "Request Body" body=""
	I1006 14:25:17.686958  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:17.687378  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:18.185930  649678 type.go:168] "Request Body" body=""
	I1006 14:25:18.186011  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:18.186362  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:18.686006  649678 type.go:168] "Request Body" body=""
	I1006 14:25:18.686091  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:18.686522  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:19.186154  649678 type.go:168] "Request Body" body=""
	I1006 14:25:19.186270  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:19.186661  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:25:19.186738  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:25:19.686272  649678 type.go:168] "Request Body" body=""
	I1006 14:25:19.686357  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:19.686722  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:20.186620  649678 type.go:168] "Request Body" body=""
	I1006 14:25:20.186712  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:20.187085  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:20.686732  649678 type.go:168] "Request Body" body=""
	I1006 14:25:20.686813  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:20.687200  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:21.186886  649678 type.go:168] "Request Body" body=""
	I1006 14:25:21.186971  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:21.187421  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:25:21.187498  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:25:21.686192  649678 type.go:168] "Request Body" body=""
	I1006 14:25:21.686313  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:21.686703  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:22.186337  649678 type.go:168] "Request Body" body=""
	I1006 14:25:22.186443  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:22.186816  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:22.686392  649678 type.go:168] "Request Body" body=""
	I1006 14:25:22.686470  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:22.686872  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:23.186538  649678 type.go:168] "Request Body" body=""
	I1006 14:25:23.186623  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:23.186990  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:23.686645  649678 type.go:168] "Request Body" body=""
	I1006 14:25:23.686745  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:23.687147  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:25:23.687255  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:25:24.186838  649678 type.go:168] "Request Body" body=""
	I1006 14:25:24.186917  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:24.187309  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:24.685862  649678 type.go:168] "Request Body" body=""
	I1006 14:25:24.685944  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:24.686370  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:25.185903  649678 type.go:168] "Request Body" body=""
	I1006 14:25:25.185979  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:25.186373  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:25.685951  649678 type.go:168] "Request Body" body=""
	I1006 14:25:25.686032  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:25.686450  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:26.186018  649678 type.go:168] "Request Body" body=""
	I1006 14:25:26.186098  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:26.186497  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:25:26.186566  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:25:26.686293  649678 type.go:168] "Request Body" body=""
	I1006 14:25:26.686378  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:26.686746  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:27.186364  649678 type.go:168] "Request Body" body=""
	I1006 14:25:27.186454  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:27.186827  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:27.686418  649678 type.go:168] "Request Body" body=""
	I1006 14:25:27.686503  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:27.686844  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:28.186581  649678 type.go:168] "Request Body" body=""
	I1006 14:25:28.186676  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:28.187085  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:25:28.187196  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:25:28.686665  649678 type.go:168] "Request Body" body=""
	I1006 14:25:28.686737  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:28.687051  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:29.186712  649678 type.go:168] "Request Body" body=""
	I1006 14:25:29.186801  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:29.187161  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:29.685861  649678 type.go:168] "Request Body" body=""
	I1006 14:25:29.685951  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:29.686323  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:30.186241  649678 type.go:168] "Request Body" body=""
	I1006 14:25:30.186336  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:30.186725  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:30.686347  649678 type.go:168] "Request Body" body=""
	I1006 14:25:30.686438  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:30.686799  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:25:30.686867  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
[log condensed: the identical GET https://192.168.49.2:8441/api/v1/nodes/functional-135520 poll repeats every ~500ms from 14:25:31 through 14:26:32, each attempt sending the same Accept/User-Agent headers and receiving no response (status="", milliseconds=0); node_ready.go:55 logs the same 'dial tcp 192.168.49.2:8441: connect: connection refused' retry warning roughly every 2 seconds throughout]
	I1006 14:26:32.686763  649678 type.go:168] "Request Body" body=""
	I1006 14:26:32.686849  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:32.687250  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:33.185866  649678 type.go:168] "Request Body" body=""
	I1006 14:26:33.185966  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:33.186401  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:33.685998  649678 type.go:168] "Request Body" body=""
	I1006 14:26:33.686076  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:33.686491  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:34.186036  649678 type.go:168] "Request Body" body=""
	I1006 14:26:34.186137  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:34.186537  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:34.686069  649678 type.go:168] "Request Body" body=""
	I1006 14:26:34.686144  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:34.686500  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:26:34.686564  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:26:35.186170  649678 type.go:168] "Request Body" body=""
	I1006 14:26:35.186296  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:35.186675  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:35.686291  649678 type.go:168] "Request Body" body=""
	I1006 14:26:35.686375  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:35.686758  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:36.186396  649678 type.go:168] "Request Body" body=""
	I1006 14:26:36.186499  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:36.186883  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:36.686651  649678 type.go:168] "Request Body" body=""
	I1006 14:26:36.686732  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:36.687079  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:26:36.687145  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:26:37.186756  649678 type.go:168] "Request Body" body=""
	I1006 14:26:37.186868  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:37.187300  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:37.685900  649678 type.go:168] "Request Body" body=""
	I1006 14:26:37.686015  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:37.686475  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:38.186110  649678 type.go:168] "Request Body" body=""
	I1006 14:26:38.186226  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:38.186598  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:38.686176  649678 type.go:168] "Request Body" body=""
	I1006 14:26:38.686303  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:38.686658  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:39.186240  649678 type.go:168] "Request Body" body=""
	I1006 14:26:39.186320  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:39.186682  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:26:39.186749  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:26:39.686298  649678 type.go:168] "Request Body" body=""
	I1006 14:26:39.686387  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:39.686746  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:40.186587  649678 type.go:168] "Request Body" body=""
	I1006 14:26:40.186667  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:40.187038  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:40.686696  649678 type.go:168] "Request Body" body=""
	I1006 14:26:40.686801  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:40.687169  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:41.186829  649678 type.go:168] "Request Body" body=""
	I1006 14:26:41.186908  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:41.187312  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:26:41.187383  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:26:41.686029  649678 type.go:168] "Request Body" body=""
	I1006 14:26:41.686108  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:41.686522  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:42.186071  649678 type.go:168] "Request Body" body=""
	I1006 14:26:42.186168  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:42.186549  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:42.686104  649678 type.go:168] "Request Body" body=""
	I1006 14:26:42.686190  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:42.686575  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:43.186140  649678 type.go:168] "Request Body" body=""
	I1006 14:26:43.186255  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:43.186605  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:43.686244  649678 type.go:168] "Request Body" body=""
	I1006 14:26:43.686321  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:43.686657  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:26:43.686731  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:26:44.186303  649678 type.go:168] "Request Body" body=""
	I1006 14:26:44.186390  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:44.186758  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:44.686323  649678 type.go:168] "Request Body" body=""
	I1006 14:26:44.686402  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:44.686737  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:45.186332  649678 type.go:168] "Request Body" body=""
	I1006 14:26:45.186410  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:45.186776  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:45.686331  649678 type.go:168] "Request Body" body=""
	I1006 14:26:45.686415  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:45.686779  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:26:45.686856  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:26:46.186339  649678 type.go:168] "Request Body" body=""
	I1006 14:26:46.186430  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:46.186785  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:46.686621  649678 type.go:168] "Request Body" body=""
	I1006 14:26:46.686715  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:46.687061  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:47.186713  649678 type.go:168] "Request Body" body=""
	I1006 14:26:47.186815  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:47.187185  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:47.686868  649678 type.go:168] "Request Body" body=""
	I1006 14:26:47.686957  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:47.687305  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:26:47.687372  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:26:48.185956  649678 type.go:168] "Request Body" body=""
	I1006 14:26:48.186058  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:48.186446  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:48.686113  649678 type.go:168] "Request Body" body=""
	I1006 14:26:48.686236  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:48.686589  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:49.186156  649678 type.go:168] "Request Body" body=""
	I1006 14:26:49.186290  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:49.186679  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:49.686186  649678 type.go:168] "Request Body" body=""
	I1006 14:26:49.686282  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:49.686588  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:50.186404  649678 type.go:168] "Request Body" body=""
	I1006 14:26:50.186506  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:50.186917  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:26:50.186990  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:26:50.686607  649678 type.go:168] "Request Body" body=""
	I1006 14:26:50.686695  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:50.687128  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:51.186788  649678 type.go:168] "Request Body" body=""
	I1006 14:26:51.186968  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:51.187381  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:51.686169  649678 type.go:168] "Request Body" body=""
	I1006 14:26:51.686282  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:51.686666  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:52.186376  649678 type.go:168] "Request Body" body=""
	I1006 14:26:52.186493  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:52.186854  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:52.686550  649678 type.go:168] "Request Body" body=""
	I1006 14:26:52.686631  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:52.686915  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:26:52.686968  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:26:53.186633  649678 type.go:168] "Request Body" body=""
	I1006 14:26:53.186732  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:53.187095  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:53.686774  649678 type.go:168] "Request Body" body=""
	I1006 14:26:53.686871  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:53.687310  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:54.185884  649678 type.go:168] "Request Body" body=""
	I1006 14:26:54.185972  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:54.186391  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:54.685933  649678 type.go:168] "Request Body" body=""
	I1006 14:26:54.686006  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:54.686391  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:55.186064  649678 type.go:168] "Request Body" body=""
	I1006 14:26:55.186180  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:55.186574  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:26:55.186642  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:26:55.686159  649678 type.go:168] "Request Body" body=""
	I1006 14:26:55.686263  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:55.686668  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:56.186304  649678 type.go:168] "Request Body" body=""
	I1006 14:26:56.186418  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:56.186815  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:56.686705  649678 type.go:168] "Request Body" body=""
	I1006 14:26:56.686789  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:56.687169  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:57.186778  649678 type.go:168] "Request Body" body=""
	I1006 14:26:57.186869  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:57.187240  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:26:57.187304  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:26:57.685924  649678 type.go:168] "Request Body" body=""
	I1006 14:26:57.686000  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:57.686362  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:58.185951  649678 type.go:168] "Request Body" body=""
	I1006 14:26:58.186045  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:58.186445  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:58.685995  649678 type.go:168] "Request Body" body=""
	I1006 14:26:58.686071  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:58.686437  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:59.186003  649678 type.go:168] "Request Body" body=""
	I1006 14:26:59.186190  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:59.186571  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:59.686153  649678 type.go:168] "Request Body" body=""
	I1006 14:26:59.686257  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:59.686662  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:26:59.686725  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:27:00.186605  649678 type.go:168] "Request Body" body=""
	I1006 14:27:00.186714  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:00.187091  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:00.686763  649678 type.go:168] "Request Body" body=""
	I1006 14:27:00.686859  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:00.687243  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:01.186928  649678 type.go:168] "Request Body" body=""
	I1006 14:27:01.187012  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:01.187398  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:01.686308  649678 type.go:168] "Request Body" body=""
	I1006 14:27:01.686391  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:01.686761  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:27:01.686839  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:27:02.186358  649678 type.go:168] "Request Body" body=""
	I1006 14:27:02.186439  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:02.186809  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:02.686423  649678 type.go:168] "Request Body" body=""
	I1006 14:27:02.686509  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:02.686907  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:03.186590  649678 type.go:168] "Request Body" body=""
	I1006 14:27:03.186676  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:03.187035  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:03.686678  649678 type.go:168] "Request Body" body=""
	I1006 14:27:03.686764  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:03.687130  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:27:03.687245  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:27:04.186807  649678 type.go:168] "Request Body" body=""
	I1006 14:27:04.186891  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:04.187266  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:04.686913  649678 type.go:168] "Request Body" body=""
	I1006 14:27:04.686987  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:04.687327  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:05.185951  649678 type.go:168] "Request Body" body=""
	I1006 14:27:05.186036  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:05.186442  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:05.685992  649678 type.go:168] "Request Body" body=""
	I1006 14:27:05.686068  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:05.686436  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:06.186013  649678 type.go:168] "Request Body" body=""
	I1006 14:27:06.186094  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:06.186496  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:27:06.186569  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:27:06.686265  649678 type.go:168] "Request Body" body=""
	I1006 14:27:06.686367  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:06.686740  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:07.186336  649678 type.go:168] "Request Body" body=""
	I1006 14:27:07.186417  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:07.186760  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:07.686331  649678 type.go:168] "Request Body" body=""
	I1006 14:27:07.686437  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:07.686806  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:08.186436  649678 type.go:168] "Request Body" body=""
	I1006 14:27:08.186520  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:08.186903  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:27:08.186969  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:27:08.686610  649678 type.go:168] "Request Body" body=""
	I1006 14:27:08.686699  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:08.687059  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:09.186699  649678 type.go:168] "Request Body" body=""
	I1006 14:27:09.186792  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:09.187140  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:09.686782  649678 type.go:168] "Request Body" body=""
	I1006 14:27:09.686873  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:09.687256  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:10.185990  649678 type.go:168] "Request Body" body=""
	I1006 14:27:10.186073  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:10.186441  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:10.686081  649678 type.go:168] "Request Body" body=""
	I1006 14:27:10.686241  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:10.686611  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:27:10.686681  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:27:11.186246  649678 type.go:168] "Request Body" body=""
	I1006 14:27:11.186326  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:11.186676  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:11.686547  649678 type.go:168] "Request Body" body=""
	I1006 14:27:11.686634  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:11.686982  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:12.186629  649678 type.go:168] "Request Body" body=""
	I1006 14:27:12.186708  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:12.187095  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:12.686714  649678 type.go:168] "Request Body" body=""
	I1006 14:27:12.686808  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:12.687182  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:27:12.687301  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:27:13.186802  649678 type.go:168] "Request Body" body=""
	I1006 14:27:13.186882  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:13.187293  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:13.686883  649678 type.go:168] "Request Body" body=""
	I1006 14:27:13.686963  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:13.687307  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:14.185879  649678 type.go:168] "Request Body" body=""
	I1006 14:27:14.185967  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:14.186371  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:14.685892  649678 type.go:168] "Request Body" body=""
	I1006 14:27:14.685968  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:14.686306  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:15.185837  649678 type.go:168] "Request Body" body=""
	I1006 14:27:15.185912  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:15.186295  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:27:15.186372  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:27:15.685893  649678 type.go:168] "Request Body" body=""
	I1006 14:27:15.685969  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:15.686294  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:16.185990  649678 type.go:168] "Request Body" body=""
	I1006 14:27:16.186081  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:16.186492  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:16.686393  649678 type.go:168] "Request Body" body=""
	I1006 14:27:16.686478  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:16.686834  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:17.186384  649678 type.go:168] "Request Body" body=""
	I1006 14:27:17.186479  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:17.186834  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:27:17.186910  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:27:17.686523  649678 type.go:168] "Request Body" body=""
	I1006 14:27:17.686606  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:17.686989  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:18.186641  649678 type.go:168] "Request Body" body=""
	I1006 14:27:18.186739  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:18.187119  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:18.686755  649678 type.go:168] "Request Body" body=""
	I1006 14:27:18.686840  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:18.687189  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:19.186887  649678 type.go:168] "Request Body" body=""
	I1006 14:27:19.186975  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:19.187444  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:27:19.187516  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:27:19.686032  649678 type.go:168] "Request Body" body=""
	I1006 14:27:19.686111  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:19.686551  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:20.186447  649678 type.go:168] "Request Body" body=""
	I1006 14:27:20.186532  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:20.186905  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:20.686572  649678 type.go:168] "Request Body" body=""
	I1006 14:27:20.686660  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:20.687016  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:21.186692  649678 type.go:168] "Request Body" body=""
	I1006 14:27:21.186778  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:21.187150  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:21.685991  649678 type.go:168] "Request Body" body=""
	I1006 14:27:21.686073  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:21.686471  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:27:21.686536  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:27:22.186060  649678 type.go:168] "Request Body" body=""
	I1006 14:27:22.186159  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:22.186562  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:22.686161  649678 type.go:168] "Request Body" body=""
	I1006 14:27:22.686270  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:22.686631  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:23.186276  649678 type.go:168] "Request Body" body=""
	I1006 14:27:23.186365  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:23.186747  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:23.686349  649678 type.go:168] "Request Body" body=""
	I1006 14:27:23.686435  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:23.686810  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:27:23.686876  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:27:24.186408  649678 type.go:168] "Request Body" body=""
	I1006 14:27:24.186497  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:24.186870  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:24.686536  649678 type.go:168] "Request Body" body=""
	I1006 14:27:24.686611  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:24.686963  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:25.186632  649678 type.go:168] "Request Body" body=""
	I1006 14:27:25.186708  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:25.187049  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:25.686802  649678 type.go:168] "Request Body" body=""
	I1006 14:27:25.686882  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:25.687264  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:27:25.687322  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:27:26.185898  649678 type.go:168] "Request Body" body=""
	I1006 14:27:26.185976  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:26.186375  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:26.686124  649678 type.go:168] "Request Body" body=""
	I1006 14:27:26.686235  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:26.686552  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:27.186223  649678 type.go:168] "Request Body" body=""
	I1006 14:27:27.186300  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:27.186673  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:27.686275  649678 type.go:168] "Request Body" body=""
	I1006 14:27:27.686364  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:27.686719  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:28.186345  649678 type.go:168] "Request Body" body=""
	I1006 14:27:28.186434  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:28.186796  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:27:28.186861  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:27:28.686407  649678 type.go:168] "Request Body" body=""
	I1006 14:27:28.686495  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:28.686858  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:29.186569  649678 type.go:168] "Request Body" body=""
	I1006 14:27:29.186651  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:29.187026  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:29.686656  649678 type.go:168] "Request Body" body=""
	I1006 14:27:29.686728  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:29.687080  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:30.185993  649678 type.go:168] "Request Body" body=""
	I1006 14:27:30.186084  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:30.186484  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:30.686077  649678 type.go:168] "Request Body" body=""
	I1006 14:27:30.686155  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:30.686554  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:27:30.686627  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:27:31.186175  649678 type.go:168] "Request Body" body=""
	I1006 14:27:31.186286  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:31.186680  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:31.686528  649678 type.go:168] "Request Body" body=""
	I1006 14:27:31.686627  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:31.687001  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:32.186675  649678 type.go:168] "Request Body" body=""
	I1006 14:27:32.186758  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:32.187124  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:32.686856  649678 type.go:168] "Request Body" body=""
	I1006 14:27:32.686942  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:32.687307  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:27:32.687374  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:27:33.185899  649678 type.go:168] "Request Body" body=""
	I1006 14:27:33.185977  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:33.186402  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:33.685994  649678 type.go:168] "Request Body" body=""
	I1006 14:27:33.686074  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:33.686482  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:34.186077  649678 type.go:168] "Request Body" body=""
	I1006 14:27:34.186156  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:34.186558  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:34.686141  649678 type.go:168] "Request Body" body=""
	I1006 14:27:34.686238  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:34.686596  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:35.186192  649678 type.go:168] "Request Body" body=""
	I1006 14:27:35.186297  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:35.186668  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:27:35.186738  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:27:35.686376  649678 type.go:168] "Request Body" body=""
	I1006 14:27:35.686471  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:35.686827  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:36.186471  649678 type.go:168] "Request Body" body=""
	I1006 14:27:36.186549  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:36.186909  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:36.686773  649678 type.go:168] "Request Body" body=""
	I1006 14:27:36.686851  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:36.687225  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:37.186866  649678 type.go:168] "Request Body" body=""
	I1006 14:27:37.186943  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:37.187324  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:27:37.187402  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:27:37.685875  649678 type.go:168] "Request Body" body=""
	I1006 14:27:37.685951  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:37.686318  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:38.185935  649678 type.go:168] "Request Body" body=""
	I1006 14:27:38.186022  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:38.186413  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:38.685990  649678 type.go:168] "Request Body" body=""
	I1006 14:27:38.686065  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:38.686446  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:39.186040  649678 type.go:168] "Request Body" body=""
	I1006 14:27:39.186119  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:39.186517  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:39.686067  649678 type.go:168] "Request Body" body=""
	I1006 14:27:39.686152  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:39.686509  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:27:39.686570  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:27:40.186335  649678 type.go:168] "Request Body" body=""
	I1006 14:27:40.186421  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:40.186798  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:40.686383  649678 type.go:168] "Request Body" body=""
	I1006 14:27:40.686477  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:40.686843  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:41.186496  649678 type.go:168] "Request Body" body=""
	I1006 14:27:41.186589  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:41.186955  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:41.686485  649678 type.go:168] "Request Body" body=""
	I1006 14:27:41.686563  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:41.686938  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:27:41.687005  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:27:42.186439  649678 type.go:168] "Request Body" body=""
	I1006 14:27:42.186523  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:42.186890  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:42.686663  649678 type.go:168] "Request Body" body=""
	I1006 14:27:42.686739  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:42.687098  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:43.186774  649678 type.go:168] "Request Body" body=""
	I1006 14:27:43.186856  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:43.187251  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:43.686855  649678 type.go:168] "Request Body" body=""
	I1006 14:27:43.686937  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:43.687333  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:27:43.687401  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:27:44.185915  649678 type.go:168] "Request Body" body=""
	I1006 14:27:44.185993  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:44.186423  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:44.685989  649678 type.go:168] "Request Body" body=""
	I1006 14:27:44.686091  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:44.686498  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:45.186085  649678 type.go:168] "Request Body" body=""
	I1006 14:27:45.186165  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:45.186565  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:45.686116  649678 type.go:168] "Request Body" body=""
	I1006 14:27:45.686239  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:45.686593  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:46.186172  649678 type.go:168] "Request Body" body=""
	I1006 14:27:46.186282  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:46.186664  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:27:46.186734  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:27:46.686523  649678 type.go:168] "Request Body" body=""
	I1006 14:27:46.686604  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:46.686968  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:47.186636  649678 type.go:168] "Request Body" body=""
	I1006 14:27:47.186712  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:47.187063  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:47.686695  649678 type.go:168] "Request Body" body=""
	I1006 14:27:47.686772  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:47.687119  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:48.186827  649678 type.go:168] "Request Body" body=""
	I1006 14:27:48.186919  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:48.187317  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:27:48.187383  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:27:48.685929  649678 type.go:168] "Request Body" body=""
	I1006 14:27:48.686009  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:48.686363  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:49.185988  649678 type.go:168] "Request Body" body=""
	I1006 14:27:49.186066  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:49.186471  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:49.686018  649678 type.go:168] "Request Body" body=""
	I1006 14:27:49.686094  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:49.686456  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:50.186006  649678 node_ready.go:38] duration metric: took 6m0.000261558s for node "functional-135520" to be "Ready" ...
	I1006 14:27:50.189087  649678 out.go:203] 
	W1006 14:27:50.190513  649678 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1006 14:27:50.190545  649678 out.go:285] * 
	W1006 14:27:50.192353  649678 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1006 14:27:50.193614  649678 out.go:203] 
	
	
	==> CRI-O <==
	Oct 06 14:28:00 functional-135520 crio[2950]: time="2025-10-06T14:28:00.824420663Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=85218031-7b8c-433e-98e7-94ab0a5cb18e name=/runtime.v1.ImageService/ImageStatus
	Oct 06 14:28:01 functional-135520 crio[2950]: time="2025-10-06T14:28:01.130728219Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=21e95176-c1eb-4eac-a1c5-1b20ba3bb34f name=/runtime.v1.ImageService/ImageStatus
	Oct 06 14:28:01 functional-135520 crio[2950]: time="2025-10-06T14:28:01.130852232Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=21e95176-c1eb-4eac-a1c5-1b20ba3bb34f name=/runtime.v1.ImageService/ImageStatus
	Oct 06 14:28:01 functional-135520 crio[2950]: time="2025-10-06T14:28:01.130883972Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=21e95176-c1eb-4eac-a1c5-1b20ba3bb34f name=/runtime.v1.ImageService/ImageStatus
	Oct 06 14:28:01 functional-135520 crio[2950]: time="2025-10-06T14:28:01.595902226Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=79489142-a558-431d-8fb7-23db9b1565ba name=/runtime.v1.ImageService/ImageStatus
	Oct 06 14:28:01 functional-135520 crio[2950]: time="2025-10-06T14:28:01.596021943Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=79489142-a558-431d-8fb7-23db9b1565ba name=/runtime.v1.ImageService/ImageStatus
	Oct 06 14:28:01 functional-135520 crio[2950]: time="2025-10-06T14:28:01.596050756Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=79489142-a558-431d-8fb7-23db9b1565ba name=/runtime.v1.ImageService/ImageStatus
	Oct 06 14:28:01 functional-135520 crio[2950]: time="2025-10-06T14:28:01.620844267Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=9e2bbeed-d602-4af9-8ab4-f9b8ab20dddb name=/runtime.v1.ImageService/ImageStatus
	Oct 06 14:28:01 functional-135520 crio[2950]: time="2025-10-06T14:28:01.620964771Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=9e2bbeed-d602-4af9-8ab4-f9b8ab20dddb name=/runtime.v1.ImageService/ImageStatus
	Oct 06 14:28:01 functional-135520 crio[2950]: time="2025-10-06T14:28:01.621003821Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=9e2bbeed-d602-4af9-8ab4-f9b8ab20dddb name=/runtime.v1.ImageService/ImageStatus
	Oct 06 14:28:01 functional-135520 crio[2950]: time="2025-10-06T14:28:01.645920535Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=35734698-0027-497c-b541-d0a0441dd042 name=/runtime.v1.ImageService/ImageStatus
	Oct 06 14:28:01 functional-135520 crio[2950]: time="2025-10-06T14:28:01.646041194Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=35734698-0027-497c-b541-d0a0441dd042 name=/runtime.v1.ImageService/ImageStatus
	Oct 06 14:28:01 functional-135520 crio[2950]: time="2025-10-06T14:28:01.646072758Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=35734698-0027-497c-b541-d0a0441dd042 name=/runtime.v1.ImageService/ImageStatus
	Oct 06 14:28:02 functional-135520 crio[2950]: time="2025-10-06T14:28:02.116111529Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=8a30ea05-b08e-46c1-917b-0164344a7cc9 name=/runtime.v1.ImageService/ImageStatus
	Oct 06 14:28:02 functional-135520 crio[2950]: time="2025-10-06T14:28:02.516234093Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=e677ac8f-d76b-4473-833d-002c35d4d82c name=/runtime.v1.ImageService/ImageStatus
	Oct 06 14:28:02 functional-135520 crio[2950]: time="2025-10-06T14:28:02.517126511Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=16949569-4475-4c02-a932-da141b5308d6 name=/runtime.v1.ImageService/ImageStatus
	Oct 06 14:28:02 functional-135520 crio[2950]: time="2025-10-06T14:28:02.518121936Z" level=info msg="Creating container: kube-system/kube-apiserver-functional-135520/kube-apiserver" id=53631056-42d2-4d65-99d6-fd09a0807f2a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:28:02 functional-135520 crio[2950]: time="2025-10-06T14:28:02.518389488Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 14:28:02 functional-135520 crio[2950]: time="2025-10-06T14:28:02.521879193Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 14:28:02 functional-135520 crio[2950]: time="2025-10-06T14:28:02.522492085Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 14:28:02 functional-135520 crio[2950]: time="2025-10-06T14:28:02.540395331Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=53631056-42d2-4d65-99d6-fd09a0807f2a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:28:02 functional-135520 crio[2950]: time="2025-10-06T14:28:02.541809379Z" level=info msg="createCtr: deleting container ID 39a073daffb8d517b9bf89bc91f73d0ad67e3a285107108221dcddfcc68e842b from idIndex" id=53631056-42d2-4d65-99d6-fd09a0807f2a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:28:02 functional-135520 crio[2950]: time="2025-10-06T14:28:02.541848123Z" level=info msg="createCtr: removing container 39a073daffb8d517b9bf89bc91f73d0ad67e3a285107108221dcddfcc68e842b" id=53631056-42d2-4d65-99d6-fd09a0807f2a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:28:02 functional-135520 crio[2950]: time="2025-10-06T14:28:02.541881193Z" level=info msg="createCtr: deleting container 39a073daffb8d517b9bf89bc91f73d0ad67e3a285107108221dcddfcc68e842b from storage" id=53631056-42d2-4d65-99d6-fd09a0807f2a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:28:02 functional-135520 crio[2950]: time="2025-10-06T14:28:02.54389405Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-functional-135520_kube-system_64c921c0d544efd1faaa2d85c050bc13_0" id=53631056-42d2-4d65-99d6-fd09a0807f2a name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:28:03.575609    5312 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:28:03.576308    5312 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:28:03.577926    5312 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:28:03.578416    5312 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:28:03.580004    5312 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	
	
	==> kernel <==
	 14:28:03 up  5:10,  0 user,  load average: 0.41, 0.37, 0.53
	Linux functional-135520 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 06 14:27:59 functional-135520 kubelet[1801]: E1006 14:27:59.515351    1801 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-135520\" not found" node="functional-135520"
	Oct 06 14:27:59 functional-135520 kubelet[1801]: E1006 14:27:59.515413    1801 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-135520\" not found" node="functional-135520"
	Oct 06 14:27:59 functional-135520 kubelet[1801]: E1006 14:27:59.551095    1801 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 06 14:27:59 functional-135520 kubelet[1801]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 06 14:27:59 functional-135520 kubelet[1801]:  > podSandboxID="f122bf3cdcc12aa8e4b9a0e1655bceae045fdc99afe781ed4e5deffc77adf21d"
	Oct 06 14:27:59 functional-135520 kubelet[1801]: E1006 14:27:59.551182    1801 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 06 14:27:59 functional-135520 kubelet[1801]:         container etcd start failed in pod etcd-functional-135520_kube-system(f24ebbe4b3fc964d32e35d345c0d3653): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 06 14:27:59 functional-135520 kubelet[1801]:  > logger="UnhandledError"
	Oct 06 14:27:59 functional-135520 kubelet[1801]: E1006 14:27:59.551233    1801 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-functional-135520" podUID="f24ebbe4b3fc964d32e35d345c0d3653"
	Oct 06 14:27:59 functional-135520 kubelet[1801]: E1006 14:27:59.551396    1801 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 06 14:27:59 functional-135520 kubelet[1801]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 06 14:27:59 functional-135520 kubelet[1801]:  > podSandboxID="a92786c5eb4654629f78c624cdcfef7af25c891888e7f9c4c81b2755c377da1a"
	Oct 06 14:27:59 functional-135520 kubelet[1801]: E1006 14:27:59.551465    1801 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 06 14:27:59 functional-135520 kubelet[1801]:         container kube-scheduler start failed in pod kube-scheduler-functional-135520_kube-system(5115bd1eba9594a3f2b99b5d6a4b9d59): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 06 14:27:59 functional-135520 kubelet[1801]:  > logger="UnhandledError"
	Oct 06 14:27:59 functional-135520 kubelet[1801]: E1006 14:27:59.552624    1801 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-functional-135520" podUID="5115bd1eba9594a3f2b99b5d6a4b9d59"
	Oct 06 14:28:00 functional-135520 kubelet[1801]: E1006 14:28:00.835444    1801 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://192.168.49.2:8441/api/v1/namespaces/default/events/functional-135520.186beca30fea008b\": dial tcp 192.168.49.2:8441: connect: connection refused" event="&Event{ObjectMeta:{functional-135520.186beca30fea008b  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-135520,UID:functional-135520,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node functional-135520 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:functional-135520,},FirstTimestamp:2025-10-06 14:17:44.509128843 +0000 UTC m=+0.464938753,LastTimestamp:2025-10-06 14:17:44.510554344 +0000 UTC m=+0.466364247,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-135520,}"
	Oct 06 14:28:02 functional-135520 kubelet[1801]: E1006 14:28:02.515612    1801 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-135520\" not found" node="functional-135520"
	Oct 06 14:28:02 functional-135520 kubelet[1801]: E1006 14:28:02.544276    1801 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 06 14:28:02 functional-135520 kubelet[1801]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 06 14:28:02 functional-135520 kubelet[1801]:  > podSandboxID="c8563dd0b37e233739b3c3a382aa7aa99838d00dddfb4c17bcee8072fc8b2e15"
	Oct 06 14:28:02 functional-135520 kubelet[1801]: E1006 14:28:02.544398    1801 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 06 14:28:02 functional-135520 kubelet[1801]:         container kube-apiserver start failed in pod kube-apiserver-functional-135520_kube-system(64c921c0d544efd1faaa2d85c050bc13): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 06 14:28:02 functional-135520 kubelet[1801]:  > logger="UnhandledError"
	Oct 06 14:28:02 functional-135520 kubelet[1801]: E1006 14:28:02.544446    1801 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-functional-135520" podUID="64c921c0d544efd1faaa2d85c050bc13"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-135520 -n functional-135520
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-135520 -n functional-135520: exit status 2 (297.501664ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-135520" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmd (2.14s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (2.09s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-135520 get pods
functional_test.go:756: (dbg) Non-zero exit: out/kubectl --context functional-135520 get pods: exit status 1 (101.75666ms)

                                                
                                                
** stderr ** 
	E1006 14:28:04.472537  655656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1006 14:28:04.472900  655656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1006 14:28:04.474394  655656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1006 14:28:04.474679  655656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1006 14:28:04.476089  655656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:759: failed to run kubectl directly. args "out/kubectl --context functional-135520 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/serial/MinikubeKubectlCmdDirectly]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/serial/MinikubeKubectlCmdDirectly]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-135520
helpers_test.go:243: (dbg) docker inspect functional-135520:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "3dd9a226ea42760de316c3cd10c240a06a297d856d2072b1bb8c9de31097dd20",
	        "Created": "2025-10-06T14:13:32.283355011Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 644403,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-06T14:13:32.318096257Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/3dd9a226ea42760de316c3cd10c240a06a297d856d2072b1bb8c9de31097dd20/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3dd9a226ea42760de316c3cd10c240a06a297d856d2072b1bb8c9de31097dd20/hostname",
	        "HostsPath": "/var/lib/docker/containers/3dd9a226ea42760de316c3cd10c240a06a297d856d2072b1bb8c9de31097dd20/hosts",
	        "LogPath": "/var/lib/docker/containers/3dd9a226ea42760de316c3cd10c240a06a297d856d2072b1bb8c9de31097dd20/3dd9a226ea42760de316c3cd10c240a06a297d856d2072b1bb8c9de31097dd20-json.log",
	        "Name": "/functional-135520",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-135520:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-135520",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "3dd9a226ea42760de316c3cd10c240a06a297d856d2072b1bb8c9de31097dd20",
	                "LowerDir": "/var/lib/docker/overlay2/fc963905026931708302dacddcd89a9d41c6b02cea585cc1ff491aa62dc8d60a-init/diff:/var/lib/docker/overlay2/498c39ad2e273bbda04a4b230222b9767ea2da097b1fe98436168d26143cd080/diff",
	                "MergedDir": "/var/lib/docker/overlay2/fc963905026931708302dacddcd89a9d41c6b02cea585cc1ff491aa62dc8d60a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/fc963905026931708302dacddcd89a9d41c6b02cea585cc1ff491aa62dc8d60a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/fc963905026931708302dacddcd89a9d41c6b02cea585cc1ff491aa62dc8d60a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-135520",
	                "Source": "/var/lib/docker/volumes/functional-135520/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-135520",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-135520",
	                "name.minikube.sigs.k8s.io": "functional-135520",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6368ffca3e5840f94a34614c511d9f0a0a4ca0d05de4fe1f94c8bfdc332f1a62",
	            "SandboxKey": "/var/run/docker/netns/6368ffca3e58",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32878"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32879"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32882"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32880"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32881"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-135520": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ae:d1:94:25:38:1c",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f712be59dd18dac98bed5f234c9f77a39e85277143d6f46285adcd3b0185d552",
	                    "EndpointID": "b816964b653b1b5116e3262dfdc87af272931013ef5b9e2714c9ff7357118a6f",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-135520",
	                        "3dd9a226ea42"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-135520 -n functional-135520
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-135520 -n functional-135520: exit status 2 (289.427326ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctional/serial/MinikubeKubectlCmdDirectly FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/serial/MinikubeKubectlCmdDirectly]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-135520 logs -n 25
helpers_test.go:260: TestFunctional/serial/MinikubeKubectlCmdDirectly logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                     ARGS                                                      │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ pause   │ nospam-500584 --log_dir /tmp/nospam-500584 pause                                                              │ nospam-500584     │ jenkins │ v1.37.0 │ 06 Oct 25 14:13 UTC │ 06 Oct 25 14:13 UTC │
	│ unpause │ nospam-500584 --log_dir /tmp/nospam-500584 unpause                                                            │ nospam-500584     │ jenkins │ v1.37.0 │ 06 Oct 25 14:13 UTC │ 06 Oct 25 14:13 UTC │
	│ unpause │ nospam-500584 --log_dir /tmp/nospam-500584 unpause                                                            │ nospam-500584     │ jenkins │ v1.37.0 │ 06 Oct 25 14:13 UTC │ 06 Oct 25 14:13 UTC │
	│ unpause │ nospam-500584 --log_dir /tmp/nospam-500584 unpause                                                            │ nospam-500584     │ jenkins │ v1.37.0 │ 06 Oct 25 14:13 UTC │ 06 Oct 25 14:13 UTC │
	│ stop    │ nospam-500584 --log_dir /tmp/nospam-500584 stop                                                               │ nospam-500584     │ jenkins │ v1.37.0 │ 06 Oct 25 14:13 UTC │ 06 Oct 25 14:13 UTC │
	│ stop    │ nospam-500584 --log_dir /tmp/nospam-500584 stop                                                               │ nospam-500584     │ jenkins │ v1.37.0 │ 06 Oct 25 14:13 UTC │ 06 Oct 25 14:13 UTC │
	│ stop    │ nospam-500584 --log_dir /tmp/nospam-500584 stop                                                               │ nospam-500584     │ jenkins │ v1.37.0 │ 06 Oct 25 14:13 UTC │ 06 Oct 25 14:13 UTC │
	│ delete  │ -p nospam-500584                                                                                              │ nospam-500584     │ jenkins │ v1.37.0 │ 06 Oct 25 14:13 UTC │ 06 Oct 25 14:13 UTC │
	│ start   │ -p functional-135520 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:13 UTC │                     │
	│ start   │ -p functional-135520 --alsologtostderr -v=8                                                                   │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:21 UTC │                     │
	│ cache   │ functional-135520 cache add registry.k8s.io/pause:3.1                                                         │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:27 UTC │ 06 Oct 25 14:27 UTC │
	│ cache   │ functional-135520 cache add registry.k8s.io/pause:3.3                                                         │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:27 UTC │ 06 Oct 25 14:27 UTC │
	│ cache   │ functional-135520 cache add registry.k8s.io/pause:latest                                                      │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:27 UTC │ 06 Oct 25 14:27 UTC │
	│ cache   │ functional-135520 cache add minikube-local-cache-test:functional-135520                                       │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:27 UTC │ 06 Oct 25 14:28 UTC │
	│ cache   │ functional-135520 cache delete minikube-local-cache-test:functional-135520                                    │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:28 UTC │ 06 Oct 25 14:28 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                              │ minikube          │ jenkins │ v1.37.0 │ 06 Oct 25 14:28 UTC │ 06 Oct 25 14:28 UTC │
	│ cache   │ list                                                                                                          │ minikube          │ jenkins │ v1.37.0 │ 06 Oct 25 14:28 UTC │ 06 Oct 25 14:28 UTC │
	│ ssh     │ functional-135520 ssh sudo crictl images                                                                      │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:28 UTC │ 06 Oct 25 14:28 UTC │
	│ ssh     │ functional-135520 ssh sudo crictl rmi registry.k8s.io/pause:latest                                            │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:28 UTC │ 06 Oct 25 14:28 UTC │
	│ ssh     │ functional-135520 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                       │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:28 UTC │                     │
	│ cache   │ functional-135520 cache reload                                                                                │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:28 UTC │ 06 Oct 25 14:28 UTC │
	│ ssh     │ functional-135520 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                       │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:28 UTC │ 06 Oct 25 14:28 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                              │ minikube          │ jenkins │ v1.37.0 │ 06 Oct 25 14:28 UTC │ 06 Oct 25 14:28 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                           │ minikube          │ jenkins │ v1.37.0 │ 06 Oct 25 14:28 UTC │ 06 Oct 25 14:28 UTC │
	│ kubectl │ functional-135520 kubectl -- --context functional-135520 get pods                                             │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:28 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/06 14:21:46
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1006 14:21:46.323016  649678 out.go:360] Setting OutFile to fd 1 ...
	I1006 14:21:46.323271  649678 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 14:21:46.323279  649678 out.go:374] Setting ErrFile to fd 2...
	I1006 14:21:46.323283  649678 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 14:21:46.323475  649678 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-626179/.minikube/bin
	I1006 14:21:46.323908  649678 out.go:368] Setting JSON to false
	I1006 14:21:46.324826  649678 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":18242,"bootTime":1759742264,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1006 14:21:46.324926  649678 start.go:140] virtualization: kvm guest
	I1006 14:21:46.326925  649678 out.go:179] * [functional-135520] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1006 14:21:46.327942  649678 notify.go:220] Checking for updates...
	I1006 14:21:46.327965  649678 out.go:179]   - MINIKUBE_LOCATION=21701
	I1006 14:21:46.329155  649678 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1006 14:21:46.330229  649678 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21701-626179/kubeconfig
	I1006 14:21:46.331298  649678 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21701-626179/.minikube
	I1006 14:21:46.332353  649678 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1006 14:21:46.333341  649678 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1006 14:21:46.334666  649678 config.go:182] Loaded profile config "functional-135520": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 14:21:46.334805  649678 driver.go:421] Setting default libvirt URI to qemu:///system
	I1006 14:21:46.359710  649678 docker.go:123] docker version: linux-28.5.0:Docker Engine - Community
	I1006 14:21:46.359861  649678 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 14:21:46.415678  649678 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-06 14:21:46.405264016 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1006 14:21:46.415787  649678 docker.go:318] overlay module found
	I1006 14:21:46.417155  649678 out.go:179] * Using the docker driver based on existing profile
	I1006 14:21:46.418292  649678 start.go:304] selected driver: docker
	I1006 14:21:46.418308  649678 start.go:924] validating driver "docker" against &{Name:functional-135520 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-135520 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 14:21:46.418380  649678 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1006 14:21:46.418468  649678 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 14:21:46.473903  649678 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-06 14:21:46.464043789 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1006 14:21:46.474648  649678 cni.go:84] Creating CNI manager for ""
	I1006 14:21:46.474719  649678 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1006 14:21:46.474770  649678 start.go:348] cluster config:
	{Name:functional-135520 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-135520 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 14:21:46.476311  649678 out.go:179] * Starting "functional-135520" primary control-plane node in "functional-135520" cluster
	I1006 14:21:46.477235  649678 cache.go:123] Beginning downloading kic base image for docker with crio
	I1006 14:21:46.478074  649678 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1006 14:21:46.479119  649678 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1006 14:21:46.479164  649678 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21701-626179/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1006 14:21:46.479185  649678 cache.go:58] Caching tarball of preloaded images
	I1006 14:21:46.479228  649678 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1006 14:21:46.479294  649678 preload.go:233] Found /home/jenkins/minikube-integration/21701-626179/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1006 14:21:46.479309  649678 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1006 14:21:46.479413  649678 profile.go:143] Saving config to /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/config.json ...
	I1006 14:21:46.499695  649678 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1006 14:21:46.499723  649678 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1006 14:21:46.499744  649678 cache.go:232] Successfully downloaded all kic artifacts
	I1006 14:21:46.499779  649678 start.go:360] acquireMachinesLock for functional-135520: {Name:mk634323c4619e77647ac9d9aaca492e399526ae Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1006 14:21:46.499864  649678 start.go:364] duration metric: took 47.895µs to acquireMachinesLock for "functional-135520"
	I1006 14:21:46.499886  649678 start.go:96] Skipping create...Using existing machine configuration
	I1006 14:21:46.499892  649678 fix.go:54] fixHost starting: 
	I1006 14:21:46.500243  649678 cli_runner.go:164] Run: docker container inspect functional-135520 --format={{.State.Status}}
	I1006 14:21:46.517601  649678 fix.go:112] recreateIfNeeded on functional-135520: state=Running err=<nil>
	W1006 14:21:46.517640  649678 fix.go:138] unexpected machine state, will restart: <nil>
	I1006 14:21:46.519112  649678 out.go:252] * Updating the running docker "functional-135520" container ...
	I1006 14:21:46.519143  649678 machine.go:93] provisionDockerMachine start ...
	I1006 14:21:46.519223  649678 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-135520
	I1006 14:21:46.537175  649678 main.go:141] libmachine: Using SSH client type: native
	I1006 14:21:46.537424  649678 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32878 <nil> <nil>}
	I1006 14:21:46.537438  649678 main.go:141] libmachine: About to run SSH command:
	hostname
	I1006 14:21:46.682374  649678 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-135520
	
	I1006 14:21:46.682420  649678 ubuntu.go:182] provisioning hostname "functional-135520"
	I1006 14:21:46.682484  649678 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-135520
	I1006 14:21:46.700103  649678 main.go:141] libmachine: Using SSH client type: native
	I1006 14:21:46.700382  649678 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32878 <nil> <nil>}
	I1006 14:21:46.700401  649678 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-135520 && echo "functional-135520" | sudo tee /etc/hostname
	I1006 14:21:46.853845  649678 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-135520
	
	I1006 14:21:46.853924  649678 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-135520
	I1006 14:21:46.872015  649678 main.go:141] libmachine: Using SSH client type: native
	I1006 14:21:46.872265  649678 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32878 <nil> <nil>}
	I1006 14:21:46.872284  649678 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-135520' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-135520/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-135520' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1006 14:21:47.017154  649678 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1006 14:21:47.017184  649678 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21701-626179/.minikube CaCertPath:/home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21701-626179/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21701-626179/.minikube}
	I1006 14:21:47.017239  649678 ubuntu.go:190] setting up certificates
	I1006 14:21:47.017253  649678 provision.go:84] configureAuth start
	I1006 14:21:47.017340  649678 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-135520
	I1006 14:21:47.035104  649678 provision.go:143] copyHostCerts
	I1006 14:21:47.035140  649678 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21701-626179/.minikube/key.pem
	I1006 14:21:47.035175  649678 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-626179/.minikube/key.pem, removing ...
	I1006 14:21:47.035198  649678 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-626179/.minikube/key.pem
	I1006 14:21:47.035336  649678 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21701-626179/.minikube/key.pem (1679 bytes)
	I1006 14:21:47.035448  649678 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21701-626179/.minikube/ca.pem
	I1006 14:21:47.035468  649678 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-626179/.minikube/ca.pem, removing ...
	I1006 14:21:47.035478  649678 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-626179/.minikube/ca.pem
	I1006 14:21:47.035513  649678 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21701-626179/.minikube/ca.pem (1082 bytes)
	I1006 14:21:47.035575  649678 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21701-626179/.minikube/cert.pem
	I1006 14:21:47.035593  649678 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-626179/.minikube/cert.pem, removing ...
	I1006 14:21:47.035599  649678 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-626179/.minikube/cert.pem
	I1006 14:21:47.035623  649678 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21701-626179/.minikube/cert.pem (1123 bytes)
	I1006 14:21:47.035688  649678 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21701-626179/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca-key.pem org=jenkins.functional-135520 san=[127.0.0.1 192.168.49.2 functional-135520 localhost minikube]
	I1006 14:21:47.332166  649678 provision.go:177] copyRemoteCerts
	I1006 14:21:47.332258  649678 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1006 14:21:47.332304  649678 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-135520
	I1006 14:21:47.351185  649678 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32878 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/functional-135520/id_rsa Username:docker}
	I1006 14:21:47.453191  649678 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1006 14:21:47.453264  649678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1006 14:21:47.470840  649678 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1006 14:21:47.470907  649678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1006 14:21:47.487466  649678 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1006 14:21:47.487518  649678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1006 14:21:47.504343  649678 provision.go:87] duration metric: took 487.07429ms to configureAuth
	I1006 14:21:47.504374  649678 ubuntu.go:206] setting minikube options for container-runtime
	I1006 14:21:47.504541  649678 config.go:182] Loaded profile config "functional-135520": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 14:21:47.504639  649678 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-135520
	I1006 14:21:47.523029  649678 main.go:141] libmachine: Using SSH client type: native
	I1006 14:21:47.523280  649678 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32878 <nil> <nil>}
	I1006 14:21:47.523307  649678 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1006 14:21:47.788227  649678 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1006 14:21:47.788259  649678 machine.go:96] duration metric: took 1.269106143s to provisionDockerMachine
	I1006 14:21:47.788275  649678 start.go:293] postStartSetup for "functional-135520" (driver="docker")
	I1006 14:21:47.788290  649678 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1006 14:21:47.788372  649678 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1006 14:21:47.788428  649678 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-135520
	I1006 14:21:47.805850  649678 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32878 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/functional-135520/id_rsa Username:docker}
	I1006 14:21:47.908894  649678 ssh_runner.go:195] Run: cat /etc/os-release
	I1006 14:21:47.912773  649678 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1006 14:21:47.912795  649678 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1006 14:21:47.912801  649678 command_runner.go:130] > VERSION_ID="12"
	I1006 14:21:47.912807  649678 command_runner.go:130] > VERSION="12 (bookworm)"
	I1006 14:21:47.912813  649678 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1006 14:21:47.912819  649678 command_runner.go:130] > ID=debian
	I1006 14:21:47.912827  649678 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1006 14:21:47.912834  649678 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1006 14:21:47.912843  649678 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1006 14:21:47.912900  649678 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1006 14:21:47.912919  649678 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1006 14:21:47.912929  649678 filesync.go:126] Scanning /home/jenkins/minikube-integration/21701-626179/.minikube/addons for local assets ...
	I1006 14:21:47.912988  649678 filesync.go:126] Scanning /home/jenkins/minikube-integration/21701-626179/.minikube/files for local assets ...
	I1006 14:21:47.913065  649678 filesync.go:149] local asset: /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem -> 6297192.pem in /etc/ssl/certs
	I1006 14:21:47.913078  649678 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem -> /etc/ssl/certs/6297192.pem
	I1006 14:21:47.913143  649678 filesync.go:149] local asset: /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/test/nested/copy/629719/hosts -> hosts in /etc/test/nested/copy/629719
	I1006 14:21:47.913151  649678 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/test/nested/copy/629719/hosts -> /etc/test/nested/copy/629719/hosts
	I1006 14:21:47.913182  649678 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/629719
	I1006 14:21:47.920839  649678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem --> /etc/ssl/certs/6297192.pem (1708 bytes)
	I1006 14:21:47.937786  649678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/test/nested/copy/629719/hosts --> /etc/test/nested/copy/629719/hosts (40 bytes)
	I1006 14:21:47.954760  649678 start.go:296] duration metric: took 166.455369ms for postStartSetup
	I1006 14:21:47.954834  649678 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1006 14:21:47.954870  649678 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-135520
	I1006 14:21:47.972368  649678 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32878 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/functional-135520/id_rsa Username:docker}
	I1006 14:21:48.072535  649678 command_runner.go:130] > 38%
	I1006 14:21:48.072624  649678 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1006 14:21:48.077267  649678 command_runner.go:130] > 182G
	I1006 14:21:48.077574  649678 fix.go:56] duration metric: took 1.577678011s for fixHost
	I1006 14:21:48.077595  649678 start.go:83] releasing machines lock for "functional-135520", held for 1.577717734s
	I1006 14:21:48.077675  649678 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-135520
	I1006 14:21:48.095670  649678 ssh_runner.go:195] Run: cat /version.json
	I1006 14:21:48.095722  649678 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-135520
	I1006 14:21:48.095754  649678 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1006 14:21:48.095827  649678 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-135520
	I1006 14:21:48.113591  649678 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32878 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/functional-135520/id_rsa Username:docker}
	I1006 14:21:48.115313  649678 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32878 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/functional-135520/id_rsa Username:docker}
	I1006 14:21:48.268773  649678 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1006 14:21:48.268839  649678 command_runner.go:130] > {"iso_version": "v1.37.0-1758198818-20370", "kicbase_version": "v0.0.48-1759382731-21643", "minikube_version": "v1.37.0", "commit": "b0c70dd4d342e6443a02916e52d246d8cdb181c4"}
	I1006 14:21:48.268953  649678 ssh_runner.go:195] Run: systemctl --version
	I1006 14:21:48.275683  649678 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1006 14:21:48.275717  649678 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1006 14:21:48.275778  649678 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1006 14:21:48.311695  649678 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1006 14:21:48.316662  649678 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1006 14:21:48.316719  649678 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1006 14:21:48.316778  649678 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1006 14:21:48.324682  649678 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1006 14:21:48.324705  649678 start.go:495] detecting cgroup driver to use...
	I1006 14:21:48.324740  649678 detect.go:190] detected "systemd" cgroup driver on host os
	I1006 14:21:48.324780  649678 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1006 14:21:48.339343  649678 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1006 14:21:48.350971  649678 docker.go:218] disabling cri-docker service (if available) ...
	I1006 14:21:48.351020  649678 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1006 14:21:48.364377  649678 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1006 14:21:48.375810  649678 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1006 14:21:48.466998  649678 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1006 14:21:48.555437  649678 docker.go:234] disabling docker service ...
	I1006 14:21:48.555507  649678 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1006 14:21:48.569642  649678 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1006 14:21:48.581371  649678 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1006 14:21:48.660341  649678 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1006 14:21:48.745051  649678 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1006 14:21:48.757689  649678 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1006 14:21:48.770829  649678 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1006 14:21:48.771733  649678 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1006 14:21:48.771806  649678 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:21:48.781084  649678 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1006 14:21:48.781164  649678 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:21:48.790125  649678 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:21:48.798751  649678 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:21:48.807637  649678 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1006 14:21:48.815986  649678 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:21:48.824650  649678 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:21:48.832873  649678 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:21:48.841368  649678 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1006 14:21:48.847999  649678 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1006 14:21:48.848646  649678 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1006 14:21:48.855735  649678 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 14:21:48.941247  649678 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1006 14:21:49.054732  649678 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1006 14:21:49.054813  649678 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1006 14:21:49.059042  649678 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1006 14:21:49.059070  649678 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1006 14:21:49.059079  649678 command_runner.go:130] > Device: 0,59	Inode: 3845        Links: 1
	I1006 14:21:49.059086  649678 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1006 14:21:49.059091  649678 command_runner.go:130] > Access: 2025-10-06 14:21:49.037104102 +0000
	I1006 14:21:49.059104  649678 command_runner.go:130] > Modify: 2025-10-06 14:21:49.037104102 +0000
	I1006 14:21:49.059109  649678 command_runner.go:130] > Change: 2025-10-06 14:21:49.037104102 +0000
	I1006 14:21:49.059113  649678 command_runner.go:130] >  Birth: 2025-10-06 14:21:49.037104102 +0000
	I1006 14:21:49.059133  649678 start.go:563] Will wait 60s for crictl version
	I1006 14:21:49.059181  649678 ssh_runner.go:195] Run: which crictl
	I1006 14:21:49.062689  649678 command_runner.go:130] > /usr/local/bin/crictl
	I1006 14:21:49.062764  649678 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1006 14:21:49.086605  649678 command_runner.go:130] > Version:  0.1.0
	I1006 14:21:49.086623  649678 command_runner.go:130] > RuntimeName:  cri-o
	I1006 14:21:49.086627  649678 command_runner.go:130] > RuntimeVersion:  1.34.1
	I1006 14:21:49.086632  649678 command_runner.go:130] > RuntimeApiVersion:  v1
	I1006 14:21:49.088423  649678 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1006 14:21:49.088499  649678 ssh_runner.go:195] Run: crio --version
	I1006 14:21:49.118625  649678 command_runner.go:130] > crio version 1.34.1
	I1006 14:21:49.118652  649678 command_runner.go:130] >    GitCommit:      8e14bff4153ba033f12ed3ffa3cadaca5425b313
	I1006 14:21:49.118659  649678 command_runner.go:130] >    GitCommitDate:  2025-10-01T13:04:13Z
	I1006 14:21:49.118666  649678 command_runner.go:130] >    GitTreeState:   dirty
	I1006 14:21:49.118672  649678 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1006 14:21:49.118678  649678 command_runner.go:130] >    GoVersion:      go1.24.6
	I1006 14:21:49.118683  649678 command_runner.go:130] >    Compiler:       gc
	I1006 14:21:49.118692  649678 command_runner.go:130] >    Platform:       linux/amd64
	I1006 14:21:49.118700  649678 command_runner.go:130] >    Linkmode:       static
	I1006 14:21:49.118708  649678 command_runner.go:130] >    BuildTags:
	I1006 14:21:49.118718  649678 command_runner.go:130] >      static
	I1006 14:21:49.118724  649678 command_runner.go:130] >      netgo
	I1006 14:21:49.118729  649678 command_runner.go:130] >      osusergo
	I1006 14:21:49.118739  649678 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1006 14:21:49.118745  649678 command_runner.go:130] >      seccomp
	I1006 14:21:49.118749  649678 command_runner.go:130] >      apparmor
	I1006 14:21:49.118753  649678 command_runner.go:130] >      selinux
	I1006 14:21:49.118757  649678 command_runner.go:130] >    LDFlags:          unknown
	I1006 14:21:49.118781  649678 command_runner.go:130] >    SeccompEnabled:   true
	I1006 14:21:49.118789  649678 command_runner.go:130] >    AppArmorEnabled:  false
	I1006 14:21:49.118869  649678 ssh_runner.go:195] Run: crio --version
	I1006 14:21:49.147173  649678 command_runner.go:130] > crio version 1.34.1
	I1006 14:21:49.147230  649678 command_runner.go:130] >    GitCommit:      8e14bff4153ba033f12ed3ffa3cadaca5425b313
	I1006 14:21:49.147241  649678 command_runner.go:130] >    GitCommitDate:  2025-10-01T13:04:13Z
	I1006 14:21:49.147249  649678 command_runner.go:130] >    GitTreeState:   dirty
	I1006 14:21:49.147257  649678 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1006 14:21:49.147263  649678 command_runner.go:130] >    GoVersion:      go1.24.6
	I1006 14:21:49.147267  649678 command_runner.go:130] >    Compiler:       gc
	I1006 14:21:49.147283  649678 command_runner.go:130] >    Platform:       linux/amd64
	I1006 14:21:49.147292  649678 command_runner.go:130] >    Linkmode:       static
	I1006 14:21:49.147296  649678 command_runner.go:130] >    BuildTags:
	I1006 14:21:49.147299  649678 command_runner.go:130] >      static
	I1006 14:21:49.147303  649678 command_runner.go:130] >      netgo
	I1006 14:21:49.147309  649678 command_runner.go:130] >      osusergo
	I1006 14:21:49.147313  649678 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1006 14:21:49.147320  649678 command_runner.go:130] >      seccomp
	I1006 14:21:49.147324  649678 command_runner.go:130] >      apparmor
	I1006 14:21:49.147330  649678 command_runner.go:130] >      selinux
	I1006 14:21:49.147334  649678 command_runner.go:130] >    LDFlags:          unknown
	I1006 14:21:49.147340  649678 command_runner.go:130] >    SeccompEnabled:   true
	I1006 14:21:49.147443  649678 command_runner.go:130] >    AppArmorEnabled:  false
	I1006 14:21:49.149760  649678 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1006 14:21:49.150923  649678 cli_runner.go:164] Run: docker network inspect functional-135520 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1006 14:21:49.168305  649678 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1006 14:21:49.172524  649678 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1006 14:21:49.172624  649678 kubeadm.go:883] updating cluster {Name:functional-135520 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-135520 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1006 14:21:49.172735  649678 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1006 14:21:49.172777  649678 ssh_runner.go:195] Run: sudo crictl images --output json
	I1006 14:21:49.203555  649678 command_runner.go:130] > {
	I1006 14:21:49.203573  649678 command_runner.go:130] >   "images":  [
	I1006 14:21:49.203577  649678 command_runner.go:130] >     {
	I1006 14:21:49.203585  649678 command_runner.go:130] >       "id":  "409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c",
	I1006 14:21:49.203589  649678 command_runner.go:130] >       "repoTags":  [
	I1006 14:21:49.203596  649678 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1006 14:21:49.203599  649678 command_runner.go:130] >       ],
	I1006 14:21:49.203603  649678 command_runner.go:130] >       "repoDigests":  [
	I1006 14:21:49.203613  649678 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1006 14:21:49.203619  649678 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"
	I1006 14:21:49.203623  649678 command_runner.go:130] >       ],
	I1006 14:21:49.203628  649678 command_runner.go:130] >       "size":  "109379124",
	I1006 14:21:49.203634  649678 command_runner.go:130] >       "username":  "",
	I1006 14:21:49.203641  649678 command_runner.go:130] >       "pinned":  false
	I1006 14:21:49.203647  649678 command_runner.go:130] >     },
	I1006 14:21:49.203650  649678 command_runner.go:130] >     {
	I1006 14:21:49.203656  649678 command_runner.go:130] >       "id":  "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1006 14:21:49.203660  649678 command_runner.go:130] >       "repoTags":  [
	I1006 14:21:49.203665  649678 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1006 14:21:49.203671  649678 command_runner.go:130] >       ],
	I1006 14:21:49.203676  649678 command_runner.go:130] >       "repoDigests":  [
	I1006 14:21:49.203684  649678 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1006 14:21:49.203694  649678 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1006 14:21:49.203697  649678 command_runner.go:130] >       ],
	I1006 14:21:49.203701  649678 command_runner.go:130] >       "size":  "31470524",
	I1006 14:21:49.203705  649678 command_runner.go:130] >       "username":  "",
	I1006 14:21:49.203716  649678 command_runner.go:130] >       "pinned":  false
	I1006 14:21:49.203722  649678 command_runner.go:130] >     },
	I1006 14:21:49.203725  649678 command_runner.go:130] >     {
	I1006 14:21:49.203731  649678 command_runner.go:130] >       "id":  "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969",
	I1006 14:21:49.203737  649678 command_runner.go:130] >       "repoTags":  [
	I1006 14:21:49.203742  649678 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.12.1"
	I1006 14:21:49.203748  649678 command_runner.go:130] >       ],
	I1006 14:21:49.203752  649678 command_runner.go:130] >       "repoDigests":  [
	I1006 14:21:49.203759  649678 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998",
	I1006 14:21:49.203768  649678 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"
	I1006 14:21:49.203771  649678 command_runner.go:130] >       ],
	I1006 14:21:49.203775  649678 command_runner.go:130] >       "size":  "76103547",
	I1006 14:21:49.203779  649678 command_runner.go:130] >       "username":  "nonroot",
	I1006 14:21:49.203783  649678 command_runner.go:130] >       "pinned":  false
	I1006 14:21:49.203785  649678 command_runner.go:130] >     },
	I1006 14:21:49.203789  649678 command_runner.go:130] >     {
	I1006 14:21:49.203794  649678 command_runner.go:130] >       "id":  "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115",
	I1006 14:21:49.203799  649678 command_runner.go:130] >       "repoTags":  [
	I1006 14:21:49.203804  649678 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.4-0"
	I1006 14:21:49.203807  649678 command_runner.go:130] >       ],
	I1006 14:21:49.203811  649678 command_runner.go:130] >       "repoDigests":  [
	I1006 14:21:49.203817  649678 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f",
	I1006 14:21:49.203826  649678 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"
	I1006 14:21:49.203829  649678 command_runner.go:130] >       ],
	I1006 14:21:49.203836  649678 command_runner.go:130] >       "size":  "195976448",
	I1006 14:21:49.203840  649678 command_runner.go:130] >       "uid":  {
	I1006 14:21:49.203844  649678 command_runner.go:130] >         "value":  "0"
	I1006 14:21:49.203847  649678 command_runner.go:130] >       },
	I1006 14:21:49.203855  649678 command_runner.go:130] >       "username":  "",
	I1006 14:21:49.203861  649678 command_runner.go:130] >       "pinned":  false
	I1006 14:21:49.203864  649678 command_runner.go:130] >     },
	I1006 14:21:49.203867  649678 command_runner.go:130] >     {
	I1006 14:21:49.203873  649678 command_runner.go:130] >       "id":  "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97",
	I1006 14:21:49.203879  649678 command_runner.go:130] >       "repoTags":  [
	I1006 14:21:49.203884  649678 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.34.1"
	I1006 14:21:49.203887  649678 command_runner.go:130] >       ],
	I1006 14:21:49.203891  649678 command_runner.go:130] >       "repoDigests":  [
	I1006 14:21:49.203901  649678 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964",
	I1006 14:21:49.203907  649678 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"
	I1006 14:21:49.203913  649678 command_runner.go:130] >       ],
	I1006 14:21:49.203916  649678 command_runner.go:130] >       "size":  "89046001",
	I1006 14:21:49.203920  649678 command_runner.go:130] >       "uid":  {
	I1006 14:21:49.203925  649678 command_runner.go:130] >         "value":  "0"
	I1006 14:21:49.203928  649678 command_runner.go:130] >       },
	I1006 14:21:49.203931  649678 command_runner.go:130] >       "username":  "",
	I1006 14:21:49.203935  649678 command_runner.go:130] >       "pinned":  false
	I1006 14:21:49.203938  649678 command_runner.go:130] >     },
	I1006 14:21:49.203941  649678 command_runner.go:130] >     {
	I1006 14:21:49.203947  649678 command_runner.go:130] >       "id":  "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f",
	I1006 14:21:49.203953  649678 command_runner.go:130] >       "repoTags":  [
	I1006 14:21:49.203958  649678 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.34.1"
	I1006 14:21:49.203961  649678 command_runner.go:130] >       ],
	I1006 14:21:49.203965  649678 command_runner.go:130] >       "repoDigests":  [
	I1006 14:21:49.203972  649678 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89",
	I1006 14:21:49.203981  649678 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"
	I1006 14:21:49.203984  649678 command_runner.go:130] >       ],
	I1006 14:21:49.203988  649678 command_runner.go:130] >       "size":  "76004181",
	I1006 14:21:49.203992  649678 command_runner.go:130] >       "uid":  {
	I1006 14:21:49.203998  649678 command_runner.go:130] >         "value":  "0"
	I1006 14:21:49.204001  649678 command_runner.go:130] >       },
	I1006 14:21:49.204005  649678 command_runner.go:130] >       "username":  "",
	I1006 14:21:49.204011  649678 command_runner.go:130] >       "pinned":  false
	I1006 14:21:49.204014  649678 command_runner.go:130] >     },
	I1006 14:21:49.204019  649678 command_runner.go:130] >     {
	I1006 14:21:49.204024  649678 command_runner.go:130] >       "id":  "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7",
	I1006 14:21:49.204028  649678 command_runner.go:130] >       "repoTags":  [
	I1006 14:21:49.204033  649678 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.34.1"
	I1006 14:21:49.204036  649678 command_runner.go:130] >       ],
	I1006 14:21:49.204042  649678 command_runner.go:130] >       "repoDigests":  [
	I1006 14:21:49.204055  649678 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a",
	I1006 14:21:49.204067  649678 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"
	I1006 14:21:49.204073  649678 command_runner.go:130] >       ],
	I1006 14:21:49.204078  649678 command_runner.go:130] >       "size":  "73138073",
	I1006 14:21:49.204081  649678 command_runner.go:130] >       "username":  "",
	I1006 14:21:49.204085  649678 command_runner.go:130] >       "pinned":  false
	I1006 14:21:49.204089  649678 command_runner.go:130] >     },
	I1006 14:21:49.204092  649678 command_runner.go:130] >     {
	I1006 14:21:49.204097  649678 command_runner.go:130] >       "id":  "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813",
	I1006 14:21:49.204104  649678 command_runner.go:130] >       "repoTags":  [
	I1006 14:21:49.204108  649678 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.34.1"
	I1006 14:21:49.204112  649678 command_runner.go:130] >       ],
	I1006 14:21:49.204116  649678 command_runner.go:130] >       "repoDigests":  [
	I1006 14:21:49.204123  649678 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31",
	I1006 14:21:49.204153  649678 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"
	I1006 14:21:49.204160  649678 command_runner.go:130] >       ],
	I1006 14:21:49.204164  649678 command_runner.go:130] >       "size":  "53844823",
	I1006 14:21:49.204167  649678 command_runner.go:130] >       "uid":  {
	I1006 14:21:49.204170  649678 command_runner.go:130] >         "value":  "0"
	I1006 14:21:49.204174  649678 command_runner.go:130] >       },
	I1006 14:21:49.204178  649678 command_runner.go:130] >       "username":  "",
	I1006 14:21:49.204183  649678 command_runner.go:130] >       "pinned":  false
	I1006 14:21:49.204188  649678 command_runner.go:130] >     },
	I1006 14:21:49.204191  649678 command_runner.go:130] >     {
	I1006 14:21:49.204197  649678 command_runner.go:130] >       "id":  "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f",
	I1006 14:21:49.204222  649678 command_runner.go:130] >       "repoTags":  [
	I1006 14:21:49.204230  649678 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1006 14:21:49.204237  649678 command_runner.go:130] >       ],
	I1006 14:21:49.204243  649678 command_runner.go:130] >       "repoDigests":  [
	I1006 14:21:49.204253  649678 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1006 14:21:49.204260  649678 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"
	I1006 14:21:49.204266  649678 command_runner.go:130] >       ],
	I1006 14:21:49.204269  649678 command_runner.go:130] >       "size":  "742092",
	I1006 14:21:49.204273  649678 command_runner.go:130] >       "uid":  {
	I1006 14:21:49.204277  649678 command_runner.go:130] >         "value":  "65535"
	I1006 14:21:49.204280  649678 command_runner.go:130] >       },
	I1006 14:21:49.204284  649678 command_runner.go:130] >       "username":  "",
	I1006 14:21:49.204288  649678 command_runner.go:130] >       "pinned":  true
	I1006 14:21:49.204291  649678 command_runner.go:130] >     }
	I1006 14:21:49.204294  649678 command_runner.go:130] >   ]
	I1006 14:21:49.204299  649678 command_runner.go:130] > }
	I1006 14:21:49.205550  649678 crio.go:514] all images are preloaded for cri-o runtime.
	I1006 14:21:49.205570  649678 crio.go:433] Images already preloaded, skipping extraction
	I1006 14:21:49.205618  649678 ssh_runner.go:195] Run: sudo crictl images --output json
	I1006 14:21:49.229611  649678 command_runner.go:130] > {
	I1006 14:21:49.229630  649678 command_runner.go:130] >   "images":  [
	I1006 14:21:49.229637  649678 command_runner.go:130] >     {
	I1006 14:21:49.229647  649678 command_runner.go:130] >       "id":  "409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c",
	I1006 14:21:49.229656  649678 command_runner.go:130] >       "repoTags":  [
	I1006 14:21:49.229664  649678 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1006 14:21:49.229669  649678 command_runner.go:130] >       ],
	I1006 14:21:49.229675  649678 command_runner.go:130] >       "repoDigests":  [
	I1006 14:21:49.229690  649678 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1006 14:21:49.229706  649678 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"
	I1006 14:21:49.229712  649678 command_runner.go:130] >       ],
	I1006 14:21:49.229738  649678 command_runner.go:130] >       "size":  "109379124",
	I1006 14:21:49.229748  649678 command_runner.go:130] >       "username":  "",
	I1006 14:21:49.229755  649678 command_runner.go:130] >       "pinned":  false
	I1006 14:21:49.229761  649678 command_runner.go:130] >     },
	I1006 14:21:49.229770  649678 command_runner.go:130] >     {
	I1006 14:21:49.229780  649678 command_runner.go:130] >       "id":  "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1006 14:21:49.229789  649678 command_runner.go:130] >       "repoTags":  [
	I1006 14:21:49.229799  649678 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1006 14:21:49.229807  649678 command_runner.go:130] >       ],
	I1006 14:21:49.229814  649678 command_runner.go:130] >       "repoDigests":  [
	I1006 14:21:49.229830  649678 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1006 14:21:49.229846  649678 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1006 14:21:49.229854  649678 command_runner.go:130] >       ],
	I1006 14:21:49.229863  649678 command_runner.go:130] >       "size":  "31470524",
	I1006 14:21:49.229872  649678 command_runner.go:130] >       "username":  "",
	I1006 14:21:49.229894  649678 command_runner.go:130] >       "pinned":  false
	I1006 14:21:49.229902  649678 command_runner.go:130] >     },
	I1006 14:21:49.229907  649678 command_runner.go:130] >     {
	I1006 14:21:49.229918  649678 command_runner.go:130] >       "id":  "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969",
	I1006 14:21:49.229927  649678 command_runner.go:130] >       "repoTags":  [
	I1006 14:21:49.229936  649678 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.12.1"
	I1006 14:21:49.229943  649678 command_runner.go:130] >       ],
	I1006 14:21:49.229951  649678 command_runner.go:130] >       "repoDigests":  [
	I1006 14:21:49.229965  649678 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998",
	I1006 14:21:49.229980  649678 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"
	I1006 14:21:49.229999  649678 command_runner.go:130] >       ],
	I1006 14:21:49.230007  649678 command_runner.go:130] >       "size":  "76103547",
	I1006 14:21:49.230016  649678 command_runner.go:130] >       "username":  "nonroot",
	I1006 14:21:49.230023  649678 command_runner.go:130] >       "pinned":  false
	I1006 14:21:49.230031  649678 command_runner.go:130] >     },
	I1006 14:21:49.230036  649678 command_runner.go:130] >     {
	I1006 14:21:49.230050  649678 command_runner.go:130] >       "id":  "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115",
	I1006 14:21:49.230059  649678 command_runner.go:130] >       "repoTags":  [
	I1006 14:21:49.230068  649678 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.4-0"
	I1006 14:21:49.230076  649678 command_runner.go:130] >       ],
	I1006 14:21:49.230083  649678 command_runner.go:130] >       "repoDigests":  [
	I1006 14:21:49.230097  649678 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f",
	I1006 14:21:49.230112  649678 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"
	I1006 14:21:49.230119  649678 command_runner.go:130] >       ],
	I1006 14:21:49.230127  649678 command_runner.go:130] >       "size":  "195976448",
	I1006 14:21:49.230135  649678 command_runner.go:130] >       "uid":  {
	I1006 14:21:49.230143  649678 command_runner.go:130] >         "value":  "0"
	I1006 14:21:49.230152  649678 command_runner.go:130] >       },
	I1006 14:21:49.230165  649678 command_runner.go:130] >       "username":  "",
	I1006 14:21:49.230175  649678 command_runner.go:130] >       "pinned":  false
	I1006 14:21:49.230181  649678 command_runner.go:130] >     },
	I1006 14:21:49.230189  649678 command_runner.go:130] >     {
	I1006 14:21:49.230220  649678 command_runner.go:130] >       "id":  "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97",
	I1006 14:21:49.230239  649678 command_runner.go:130] >       "repoTags":  [
	I1006 14:21:49.230249  649678 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.34.1"
	I1006 14:21:49.230257  649678 command_runner.go:130] >       ],
	I1006 14:21:49.230264  649678 command_runner.go:130] >       "repoDigests":  [
	I1006 14:21:49.230279  649678 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964",
	I1006 14:21:49.230306  649678 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"
	I1006 14:21:49.230314  649678 command_runner.go:130] >       ],
	I1006 14:21:49.230321  649678 command_runner.go:130] >       "size":  "89046001",
	I1006 14:21:49.230329  649678 command_runner.go:130] >       "uid":  {
	I1006 14:21:49.230336  649678 command_runner.go:130] >         "value":  "0"
	I1006 14:21:49.230345  649678 command_runner.go:130] >       },
	I1006 14:21:49.230352  649678 command_runner.go:130] >       "username":  "",
	I1006 14:21:49.230361  649678 command_runner.go:130] >       "pinned":  false
	I1006 14:21:49.230367  649678 command_runner.go:130] >     },
	I1006 14:21:49.230375  649678 command_runner.go:130] >     {
	I1006 14:21:49.230386  649678 command_runner.go:130] >       "id":  "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f",
	I1006 14:21:49.230395  649678 command_runner.go:130] >       "repoTags":  [
	I1006 14:21:49.230406  649678 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.34.1"
	I1006 14:21:49.230414  649678 command_runner.go:130] >       ],
	I1006 14:21:49.230421  649678 command_runner.go:130] >       "repoDigests":  [
	I1006 14:21:49.230436  649678 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89",
	I1006 14:21:49.230451  649678 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"
	I1006 14:21:49.230460  649678 command_runner.go:130] >       ],
	I1006 14:21:49.230467  649678 command_runner.go:130] >       "size":  "76004181",
	I1006 14:21:49.230484  649678 command_runner.go:130] >       "uid":  {
	I1006 14:21:49.230493  649678 command_runner.go:130] >         "value":  "0"
	I1006 14:21:49.230500  649678 command_runner.go:130] >       },
	I1006 14:21:49.230507  649678 command_runner.go:130] >       "username":  "",
	I1006 14:21:49.230516  649678 command_runner.go:130] >       "pinned":  false
	I1006 14:21:49.230523  649678 command_runner.go:130] >     },
	I1006 14:21:49.230529  649678 command_runner.go:130] >     {
	I1006 14:21:49.230542  649678 command_runner.go:130] >       "id":  "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7",
	I1006 14:21:49.230549  649678 command_runner.go:130] >       "repoTags":  [
	I1006 14:21:49.230568  649678 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.34.1"
	I1006 14:21:49.230576  649678 command_runner.go:130] >       ],
	I1006 14:21:49.230583  649678 command_runner.go:130] >       "repoDigests":  [
	I1006 14:21:49.230599  649678 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a",
	I1006 14:21:49.230614  649678 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"
	I1006 14:21:49.230621  649678 command_runner.go:130] >       ],
	I1006 14:21:49.230628  649678 command_runner.go:130] >       "size":  "73138073",
	I1006 14:21:49.230637  649678 command_runner.go:130] >       "username":  "",
	I1006 14:21:49.230645  649678 command_runner.go:130] >       "pinned":  false
	I1006 14:21:49.230653  649678 command_runner.go:130] >     },
	I1006 14:21:49.230658  649678 command_runner.go:130] >     {
	I1006 14:21:49.230665  649678 command_runner.go:130] >       "id":  "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813",
	I1006 14:21:49.230670  649678 command_runner.go:130] >       "repoTags":  [
	I1006 14:21:49.230679  649678 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.34.1"
	I1006 14:21:49.230687  649678 command_runner.go:130] >       ],
	I1006 14:21:49.230693  649678 command_runner.go:130] >       "repoDigests":  [
	I1006 14:21:49.230706  649678 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31",
	I1006 14:21:49.230734  649678 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"
	I1006 14:21:49.230745  649678 command_runner.go:130] >       ],
	I1006 14:21:49.230751  649678 command_runner.go:130] >       "size":  "53844823",
	I1006 14:21:49.230758  649678 command_runner.go:130] >       "uid":  {
	I1006 14:21:49.230767  649678 command_runner.go:130] >         "value":  "0"
	I1006 14:21:49.230773  649678 command_runner.go:130] >       },
	I1006 14:21:49.230783  649678 command_runner.go:130] >       "username":  "",
	I1006 14:21:49.230791  649678 command_runner.go:130] >       "pinned":  false
	I1006 14:21:49.230799  649678 command_runner.go:130] >     },
	I1006 14:21:49.230805  649678 command_runner.go:130] >     {
	I1006 14:21:49.230819  649678 command_runner.go:130] >       "id":  "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f",
	I1006 14:21:49.230828  649678 command_runner.go:130] >       "repoTags":  [
	I1006 14:21:49.230837  649678 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1006 14:21:49.230845  649678 command_runner.go:130] >       ],
	I1006 14:21:49.230852  649678 command_runner.go:130] >       "repoDigests":  [
	I1006 14:21:49.230865  649678 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1006 14:21:49.230878  649678 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"
	I1006 14:21:49.230887  649678 command_runner.go:130] >       ],
	I1006 14:21:49.230894  649678 command_runner.go:130] >       "size":  "742092",
	I1006 14:21:49.230902  649678 command_runner.go:130] >       "uid":  {
	I1006 14:21:49.230909  649678 command_runner.go:130] >         "value":  "65535"
	I1006 14:21:49.230918  649678 command_runner.go:130] >       },
	I1006 14:21:49.230924  649678 command_runner.go:130] >       "username":  "",
	I1006 14:21:49.230934  649678 command_runner.go:130] >       "pinned":  true
	I1006 14:21:49.230940  649678 command_runner.go:130] >     }
	I1006 14:21:49.230948  649678 command_runner.go:130] >   ]
	I1006 14:21:49.230953  649678 command_runner.go:130] > }
	I1006 14:21:49.231845  649678 crio.go:514] all images are preloaded for cri-o runtime.
	I1006 14:21:49.231866  649678 cache_images.go:85] Images are preloaded, skipping loading
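	The two "crictl images --output json" dumps above are what this preload check consumes: the image list is decoded and its tags compared against the set expected for v1.34.1. As a minimal sketch of that decoding in Go — not minikube's own implementation, and assuming only the fields visible in the log (id, repoTags, repoDigests, size, username, pinned) rather than crictl's full schema:

	    package main

	    import (
	    	"encoding/json"
	    	"fmt"
	    )

	    // image mirrors only the fields visible in the crictl output above.
	    type image struct {
	    	ID          string   `json:"id"`
	    	RepoTags    []string `json:"repoTags"`
	    	RepoDigests []string `json:"repoDigests"`
	    	Size        string   `json:"size"`
	    	Username    string   `json:"username"`
	    	Pinned      bool     `json:"pinned"`
	    }

	    type imageList struct {
	    	Images []image `json:"images"`
	    }

	    func main() {
	    	// Abbreviated payload in the same shape as the log output above.
	    	raw := []byte(`{"images": [{"id": "cd073f4c5f6a", "repoTags": ["registry.k8s.io/pause:3.10.1"], "size": "742092", "pinned": true}]}`)
	    	var list imageList
	    	if err := json.Unmarshal(raw, &list); err != nil {
	    		panic(err)
	    	}
	    	for _, img := range list.Images {
	    		fmt.Printf("%s size=%s pinned=%v\n", img.RepoTags, img.Size, img.Pinned)
	    	}
	    }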
	I1006 14:21:49.231873  649678 kubeadm.go:934] updating node { 192.168.49.2 8441 v1.34.1 crio true true} ...
	I1006 14:21:49.232021  649678 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-135520 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:functional-135520 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
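	The doubled ExecStart in the unit above is the standard systemd override pattern, not a typo: in a drop-in, a bare ExecStart= first clears the command inherited from the base kubelet unit, and the second assignment installs the replacement. For a simple-type service systemd would otherwise reject a second ExecStart. The same pattern in a generic drop-in (paths illustrative):

	    [Service]
	    ExecStart=
	    ExecStart=/usr/local/bin/mydaemon --config=/etc/mydaemon/config.yaml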
	I1006 14:21:49.232106  649678 ssh_runner.go:195] Run: crio config
	I1006 14:21:49.273258  649678 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1006 14:21:49.273298  649678 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1006 14:21:49.273306  649678 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1006 14:21:49.273309  649678 command_runner.go:130] > #
	I1006 14:21:49.273321  649678 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1006 14:21:49.273332  649678 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1006 14:21:49.273343  649678 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1006 14:21:49.273357  649678 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1006 14:21:49.273367  649678 command_runner.go:130] > # reload'.
	I1006 14:21:49.273377  649678 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1006 14:21:49.273389  649678 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1006 14:21:49.273403  649678 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1006 14:21:49.273413  649678 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1006 14:21:49.273423  649678 command_runner.go:130] > [crio]
	I1006 14:21:49.273433  649678 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1006 14:21:49.273446  649678 command_runner.go:130] > # containers images, in this directory.
	I1006 14:21:49.273471  649678 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1006 14:21:49.273486  649678 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1006 14:21:49.273494  649678 command_runner.go:130] > # runroot = "/tmp/storage-run-1000/containers"
	I1006 14:21:49.273512  649678 command_runner.go:130] > # Path to the "imagestore". If set, CRI-O stores all of its images in this directory, separately from Root.
	I1006 14:21:49.273525  649678 command_runner.go:130] > # imagestore = ""
	I1006 14:21:49.273535  649678 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1006 14:21:49.273548  649678 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1006 14:21:49.273561  649678 command_runner.go:130] > # storage_driver = "overlay"
	I1006 14:21:49.273574  649678 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1006 14:21:49.273591  649678 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1006 14:21:49.273599  649678 command_runner.go:130] > # storage_option = [
	I1006 14:21:49.273613  649678 command_runner.go:130] > # ]
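	The commented root and runroot defaults above appear to reflect the unprivileged user that ran crio config; the effective system-wide defaults come from containers-storage.conf(5). To relocate storage for CRI-O alone, the same keys can be set explicitly — a sketch with illustrative values, not taken from this run:

	    [crio]
	    root = "/var/lib/containers/storage"
	    runroot = "/run/containers/storage"
	    storage_driver = "overlay"
	    storage_option = [
	    	"overlay.mountopt=nodev,metacopy=on",
	    ]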
	I1006 14:21:49.273623  649678 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1006 14:21:49.273635  649678 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1006 14:21:49.273642  649678 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1006 14:21:49.273652  649678 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1006 14:21:49.273664  649678 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1006 14:21:49.273678  649678 command_runner.go:130] > # always happen on a node reboot
	I1006 14:21:49.273690  649678 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1006 14:21:49.273712  649678 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1006 14:21:49.273725  649678 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1006 14:21:49.273743  649678 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1006 14:21:49.273751  649678 command_runner.go:130] > # version_file_persist = ""
	I1006 14:21:49.273764  649678 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1006 14:21:49.273781  649678 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1006 14:21:49.273792  649678 command_runner.go:130] > # internal_wipe = true
	I1006 14:21:49.273806  649678 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1006 14:21:49.273819  649678 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1006 14:21:49.273829  649678 command_runner.go:130] > # internal_repair = true
	I1006 14:21:49.273842  649678 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1006 14:21:49.273856  649678 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1006 14:21:49.273870  649678 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1006 14:21:49.273880  649678 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
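	Read together, the wipe settings above decide what survives a restart: the version file gates wiping containers after a reboot, version_file_persist gates wiping images after an upgrade, and with internal_wipe/internal_repair enabled the daemon handles both itself instead of requiring an external "crio wipe". Restated as an explicit block matching the defaults described in the comments above:

	    [crio]
	    internal_wipe = true
	    internal_repair = true
	    version_file = "/var/run/crio/version"
	    clean_shutdown_file = "/var/lib/crio/clean.shutdown"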
	I1006 14:21:49.273894  649678 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1006 14:21:49.273901  649678 command_runner.go:130] > [crio.api]
	I1006 14:21:49.273915  649678 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1006 14:21:49.273926  649678 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1006 14:21:49.273935  649678 command_runner.go:130] > # IP address on which the stream server will listen.
	I1006 14:21:49.273947  649678 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1006 14:21:49.273963  649678 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1006 14:21:49.273975  649678 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1006 14:21:49.273987  649678 command_runner.go:130] > # stream_port = "0"
	I1006 14:21:49.274002  649678 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1006 14:21:49.274013  649678 command_runner.go:130] > # stream_enable_tls = false
	I1006 14:21:49.274023  649678 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1006 14:21:49.274035  649678 command_runner.go:130] > # stream_idle_timeout = ""
	I1006 14:21:49.274045  649678 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1006 14:21:49.274059  649678 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes.
	I1006 14:21:49.274068  649678 command_runner.go:130] > # stream_tls_cert = ""
	I1006 14:21:49.274083  649678 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1006 14:21:49.274109  649678 command_runner.go:130] > # change and CRI-O will automatically pick up the changes.
	I1006 14:21:49.274132  649678 command_runner.go:130] > # stream_tls_key = ""
	I1006 14:21:49.274143  649678 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1006 14:21:49.274153  649678 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1006 14:21:49.274162  649678 command_runner.go:130] > # automatically pick up the changes.
	I1006 14:21:49.274173  649678 command_runner.go:130] > # stream_tls_ca = ""
	I1006 14:21:49.274218  649678 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1006 14:21:49.274233  649678 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1006 14:21:49.274245  649678 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1006 14:21:49.274257  649678 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
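	Everything in [crio.api] above has a usable default; the usual reason to override it is to move the stream server (which backs kubectl exec, attach and port-forward) onto TLS. A sketch assuming certificate paths that are not from this run:

	    [crio.api]
	    stream_address = "127.0.0.1"
	    stream_port = "10010"
	    stream_enable_tls = true
	    stream_tls_cert = "/etc/crio/stream.crt"
	    stream_tls_key = "/etc/crio/stream.key"
	    stream_tls_ca = "/etc/crio/stream-ca.pem"

	Per the comments above, the certificate files are re-read on change, so rotating them does not require a daemon restart.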
	I1006 14:21:49.274268  649678 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1006 14:21:49.274281  649678 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1006 14:21:49.274293  649678 command_runner.go:130] > [crio.runtime]
	I1006 14:21:49.274303  649678 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1006 14:21:49.274315  649678 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1006 14:21:49.274325  649678 command_runner.go:130] > # "nofile=1024:2048"
	I1006 14:21:49.274336  649678 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1006 14:21:49.274347  649678 command_runner.go:130] > # default_ulimits = [
	I1006 14:21:49.274353  649678 command_runner.go:130] > # ]
	I1006 14:21:49.274363  649678 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1006 14:21:49.274374  649678 command_runner.go:130] > # no_pivot = false
	I1006 14:21:49.274384  649678 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1006 14:21:49.274399  649678 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1006 14:21:49.274410  649678 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1006 14:21:49.274425  649678 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1006 14:21:49.274437  649678 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1006 14:21:49.274453  649678 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1006 14:21:49.274464  649678 command_runner.go:130] > # conmon = ""
	I1006 14:21:49.274473  649678 command_runner.go:130] > # Cgroup setting for conmon
	I1006 14:21:49.274487  649678 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1006 14:21:49.274498  649678 command_runner.go:130] > conmon_cgroup = "pod"
	I1006 14:21:49.274508  649678 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1006 14:21:49.274520  649678 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1006 14:21:49.274533  649678 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1006 14:21:49.274545  649678 command_runner.go:130] > # conmon_env = [
	I1006 14:21:49.274559  649678 command_runner.go:130] > # ]
	I1006 14:21:49.274566  649678 command_runner.go:130] > # Additional environment variables to set for all the
	I1006 14:21:49.274574  649678 command_runner.go:130] > # containers. These are overridden if set in the
	I1006 14:21:49.274583  649678 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1006 14:21:49.274593  649678 command_runner.go:130] > # default_env = [
	I1006 14:21:49.274599  649678 command_runner.go:130] > # ]
	I1006 14:21:49.274610  649678 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1006 14:21:49.274625  649678 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I1006 14:21:49.274633  649678 command_runner.go:130] > # selinux = false
	I1006 14:21:49.274646  649678 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1006 14:21:49.274658  649678 command_runner.go:130] > # for the runtime. If not specified or set to "", then the internal default seccomp profile will be used.
	I1006 14:21:49.274677  649678 command_runner.go:130] > # This option supports live configuration reload.
	I1006 14:21:49.274687  649678 command_runner.go:130] > # seccomp_profile = ""
	I1006 14:21:49.274698  649678 command_runner.go:130] > # Enable a seccomp profile for privileged containers from the local path.
	I1006 14:21:49.274707  649678 command_runner.go:130] > # This option supports live configuration reload.
	I1006 14:21:49.274715  649678 command_runner.go:130] > # privileged_seccomp_profile = ""
	I1006 14:21:49.274733  649678 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1006 14:21:49.274744  649678 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1006 14:21:49.274754  649678 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1006 14:21:49.274768  649678 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1006 14:21:49.274776  649678 command_runner.go:130] > # This option supports live configuration reload.
	I1006 14:21:49.274784  649678 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1006 14:21:49.274794  649678 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1006 14:21:49.274802  649678 command_runner.go:130] > # the cgroup blockio controller.
	I1006 14:21:49.274809  649678 command_runner.go:130] > # blockio_config_file = ""
	I1006 14:21:49.274820  649678 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1006 14:21:49.274828  649678 command_runner.go:130] > # blockio parameters.
	I1006 14:21:49.274840  649678 command_runner.go:130] > # blockio_reload = false
	I1006 14:21:49.274849  649678 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1006 14:21:49.274856  649678 command_runner.go:130] > # irqbalance daemon.
	I1006 14:21:49.274870  649678 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1006 14:21:49.274886  649678 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1006 14:21:49.274901  649678 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1006 14:21:49.274915  649678 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1006 14:21:49.274927  649678 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1006 14:21:49.274933  649678 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1006 14:21:49.274941  649678 command_runner.go:130] > # This option supports live configuration reload.
	I1006 14:21:49.274945  649678 command_runner.go:130] > # rdt_config_file = ""
	I1006 14:21:49.274950  649678 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1006 14:21:49.274955  649678 command_runner.go:130] > # cgroup_manager = "systemd"
	I1006 14:21:49.274962  649678 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1006 14:21:49.274968  649678 command_runner.go:130] > # separate_pull_cgroup = ""
	I1006 14:21:49.274974  649678 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1006 14:21:49.274982  649678 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1006 14:21:49.274986  649678 command_runner.go:130] > # will be added.
	I1006 14:21:49.274991  649678 command_runner.go:130] > # default_capabilities = [
	I1006 14:21:49.274994  649678 command_runner.go:130] > # 	"CHOWN",
	I1006 14:21:49.274998  649678 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1006 14:21:49.275001  649678 command_runner.go:130] > # 	"FSETID",
	I1006 14:21:49.275004  649678 command_runner.go:130] > # 	"FOWNER",
	I1006 14:21:49.275008  649678 command_runner.go:130] > # 	"SETGID",
	I1006 14:21:49.275026  649678 command_runner.go:130] > # 	"SETUID",
	I1006 14:21:49.275033  649678 command_runner.go:130] > # 	"SETPCAP",
	I1006 14:21:49.275037  649678 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1006 14:21:49.275040  649678 command_runner.go:130] > # 	"KILL",
	I1006 14:21:49.275043  649678 command_runner.go:130] > # ]
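	Note that default_capabilities is not additive: uncommenting the list replaces the default set shown above, so a hardened profile has to restate every capability it keeps. For example, a sketch that drops KILL and SETPCAP while keeping the rest:

	    default_capabilities = [
	    	"CHOWN",
	    	"DAC_OVERRIDE",
	    	"FSETID",
	    	"FOWNER",
	    	"SETGID",
	    	"SETUID",
	    	"NET_BIND_SERVICE",
	    ]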
	I1006 14:21:49.275051  649678 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1006 14:21:49.275059  649678 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1006 14:21:49.275064  649678 command_runner.go:130] > # add_inheritable_capabilities = false
	I1006 14:21:49.275071  649678 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1006 14:21:49.275077  649678 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1006 14:21:49.275083  649678 command_runner.go:130] > default_sysctls = [
	I1006 14:21:49.275087  649678 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1006 14:21:49.275090  649678 command_runner.go:130] > ]
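	This is one of the few settings in the dump that ships uncommented: net.ipv4.ip_unprivileged_port_start=0 lets container processes bind ports below 1024 without CAP_NET_BIND_SERVICE, which is what allows unprivileged pods to serve directly on 80/443. Further defaults would be appended to the same list, e.g. (second entry illustrative):

	    default_sysctls = [
	    	"net.ipv4.ip_unprivileged_port_start=0",
	    	"net.ipv4.ping_group_range=0 2147483647",
	    ]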
	I1006 14:21:49.275096  649678 command_runner.go:130] > # List of devices on the host that a
	I1006 14:21:49.275104  649678 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1006 14:21:49.275109  649678 command_runner.go:130] > # allowed_devices = [
	I1006 14:21:49.275122  649678 command_runner.go:130] > # 	"/dev/fuse",
	I1006 14:21:49.275128  649678 command_runner.go:130] > # 	"/dev/net/tun",
	I1006 14:21:49.275132  649678 command_runner.go:130] > # ]
	I1006 14:21:49.275136  649678 command_runner.go:130] > # List of additional devices, specified as
	I1006 14:21:49.275146  649678 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1006 14:21:49.275151  649678 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1006 14:21:49.275156  649678 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1006 14:21:49.275162  649678 command_runner.go:130] > # additional_devices = [
	I1006 14:21:49.275166  649678 command_runner.go:130] > # ]
	I1006 14:21:49.275170  649678 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1006 14:21:49.275176  649678 command_runner.go:130] > # cdi_spec_dirs = [
	I1006 14:21:49.275180  649678 command_runner.go:130] > # 	"/etc/cdi",
	I1006 14:21:49.275184  649678 command_runner.go:130] > # 	"/var/run/cdi",
	I1006 14:21:49.275189  649678 command_runner.go:130] > # ]
	I1006 14:21:49.275195  649678 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1006 14:21:49.275216  649678 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1006 14:21:49.275225  649678 command_runner.go:130] > # Defaults to false.
	I1006 14:21:49.275239  649678 command_runner.go:130] > # device_ownership_from_security_context = false
	I1006 14:21:49.275249  649678 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1006 14:21:49.275255  649678 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1006 14:21:49.275262  649678 command_runner.go:130] > # hooks_dir = [
	I1006 14:21:49.275267  649678 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1006 14:21:49.275273  649678 command_runner.go:130] > # ]
	I1006 14:21:49.275278  649678 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1006 14:21:49.275284  649678 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1006 14:21:49.275292  649678 command_runner.go:130] > # its default mounts from the following two files:
	I1006 14:21:49.275295  649678 command_runner.go:130] > #
	I1006 14:21:49.275300  649678 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1006 14:21:49.275309  649678 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1006 14:21:49.275315  649678 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1006 14:21:49.275328  649678 command_runner.go:130] > #
	I1006 14:21:49.275338  649678 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1006 14:21:49.275345  649678 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1006 14:21:49.275353  649678 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1006 14:21:49.275358  649678 command_runner.go:130] > #      only add mounts it finds in this file.
	I1006 14:21:49.275364  649678 command_runner.go:130] > #
	I1006 14:21:49.275370  649678 command_runner.go:130] > # default_mounts_file = ""
	I1006 14:21:49.275378  649678 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1006 14:21:49.275385  649678 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-max-pids' should be used instead.
	I1006 14:21:49.275391  649678 command_runner.go:130] > # pids_limit = -1
	I1006 14:21:49.275398  649678 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1006 14:21:49.275406  649678 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1006 14:21:49.275412  649678 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1006 14:21:49.275420  649678 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1006 14:21:49.275426  649678 command_runner.go:130] > # log_size_max = -1
	I1006 14:21:49.275433  649678 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1006 14:21:49.275439  649678 command_runner.go:130] > # log_to_journald = false
	I1006 14:21:49.275445  649678 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1006 14:21:49.275452  649678 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1006 14:21:49.275457  649678 command_runner.go:130] > # Path to directory for container attach sockets.
	I1006 14:21:49.275463  649678 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1006 14:21:49.275467  649678 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1006 14:21:49.275474  649678 command_runner.go:130] > # bind_mount_prefix = ""
	I1006 14:21:49.275479  649678 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1006 14:21:49.275485  649678 command_runner.go:130] > # read_only = false
	I1006 14:21:49.275491  649678 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1006 14:21:49.275497  649678 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1006 14:21:49.275504  649678 command_runner.go:130] > # live configuration reload.
	I1006 14:21:49.275508  649678 command_runner.go:130] > # log_level = "info"
	I1006 14:21:49.275513  649678 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1006 14:21:49.275521  649678 command_runner.go:130] > # This option supports live configuration reload.
	I1006 14:21:49.275525  649678 command_runner.go:130] > # log_filter = ""
	I1006 14:21:49.275530  649678 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1006 14:21:49.275542  649678 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1006 14:21:49.275549  649678 command_runner.go:130] > # separated by comma.
	I1006 14:21:49.275557  649678 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1006 14:21:49.275563  649678 command_runner.go:130] > # uid_mappings = ""
	I1006 14:21:49.275569  649678 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1006 14:21:49.275577  649678 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1006 14:21:49.275585  649678 command_runner.go:130] > # separated by comma.
	I1006 14:21:49.275594  649678 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1006 14:21:49.275598  649678 command_runner.go:130] > # gid_mappings = ""
	I1006 14:21:49.275606  649678 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1006 14:21:49.275614  649678 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1006 14:21:49.275621  649678 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1006 14:21:49.275630  649678 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1006 14:21:49.275634  649678 command_runner.go:130] > # minimum_mappable_uid = -1
	I1006 14:21:49.275640  649678 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1006 14:21:49.275648  649678 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1006 14:21:49.275654  649678 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1006 14:21:49.275664  649678 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1006 14:21:49.275668  649678 command_runner.go:130] > # minimum_mappable_gid = -1
	I1006 14:21:49.275676  649678 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1006 14:21:49.275683  649678 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1006 14:21:49.275690  649678 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1006 14:21:49.275694  649678 command_runner.go:130] > # ctr_stop_timeout = 30
	I1006 14:21:49.275700  649678 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1006 14:21:49.275706  649678 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1006 14:21:49.275711  649678 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1006 14:21:49.275718  649678 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1006 14:21:49.275722  649678 command_runner.go:130] > # drop_infra_ctr = true
	I1006 14:21:49.275731  649678 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1006 14:21:49.275736  649678 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1006 14:21:49.275746  649678 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1006 14:21:49.275752  649678 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1006 14:21:49.275759  649678 command_runner.go:130] > # shared_cpuset determines the CPU set which is allowed to be shared between guaranteed containers,
	I1006 14:21:49.275772  649678 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1006 14:21:49.275778  649678 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1006 14:21:49.275786  649678 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1006 14:21:49.275790  649678 command_runner.go:130] > # shared_cpuset = ""
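	Both cpuset options take the Linux CPU list syntax (comma-separated ranges such as 0-3,8). A sketch that pins infra containers to the CPUs reserved for the kubelet and allows two more to be shared by guaranteed containers — CPU numbers illustrative:

	    [crio.runtime]
	    infra_ctr_cpuset = "0-1"
	    shared_cpuset = "2-3"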
	I1006 14:21:49.275800  649678 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1006 14:21:49.275805  649678 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1006 14:21:49.275811  649678 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1006 14:21:49.275817  649678 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1006 14:21:49.275824  649678 command_runner.go:130] > # pinns_path = ""
	I1006 14:21:49.275829  649678 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1006 14:21:49.275838  649678 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1006 14:21:49.275842  649678 command_runner.go:130] > # enable_criu_support = true
	I1006 14:21:49.275849  649678 command_runner.go:130] > # Enable/disable the generation of the container,
	I1006 14:21:49.275855  649678 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1006 14:21:49.275859  649678 command_runner.go:130] > # enable_pod_events = false
	I1006 14:21:49.275865  649678 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1006 14:21:49.275872  649678 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1006 14:21:49.275876  649678 command_runner.go:130] > # default_runtime = "crun"
	I1006 14:21:49.275880  649678 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1006 14:21:49.275887  649678 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior, where the path is created as a directory).
	I1006 14:21:49.275898  649678 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1006 14:21:49.275906  649678 command_runner.go:130] > # creation as a file is not desired either.
	I1006 14:21:49.275914  649678 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1006 14:21:49.275921  649678 command_runner.go:130] > # the hostname is being managed dynamically.
	I1006 14:21:49.275925  649678 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1006 14:21:49.275930  649678 command_runner.go:130] > # ]
	I1006 14:21:49.275936  649678 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1006 14:21:49.275945  649678 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1006 14:21:49.275951  649678 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1006 14:21:49.275955  649678 command_runner.go:130] > # Each entry in the table should follow the format:
	I1006 14:21:49.275961  649678 command_runner.go:130] > #
	I1006 14:21:49.275965  649678 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1006 14:21:49.275969  649678 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1006 14:21:49.275980  649678 command_runner.go:130] > # runtime_type = "oci"
	I1006 14:21:49.275988  649678 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1006 14:21:49.275993  649678 command_runner.go:130] > # inherit_default_runtime = false
	I1006 14:21:49.275997  649678 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1006 14:21:49.276002  649678 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1006 14:21:49.276009  649678 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1006 14:21:49.276013  649678 command_runner.go:130] > # monitor_env = []
	I1006 14:21:49.276020  649678 command_runner.go:130] > # privileged_without_host_devices = false
	I1006 14:21:49.276024  649678 command_runner.go:130] > # allowed_annotations = []
	I1006 14:21:49.276029  649678 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1006 14:21:49.276035  649678 command_runner.go:130] > # no_sync_log = false
	I1006 14:21:49.276039  649678 command_runner.go:130] > # default_annotations = {}
	I1006 14:21:49.276044  649678 command_runner.go:130] > # stream_websockets = false
	I1006 14:21:49.276052  649678 command_runner.go:130] > # seccomp_profile = ""
	I1006 14:21:49.276074  649678 command_runner.go:130] > # Where:
	I1006 14:21:49.276087  649678 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1006 14:21:49.276100  649678 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1006 14:21:49.276111  649678 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1006 14:21:49.276124  649678 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1006 14:21:49.276128  649678 command_runner.go:130] > #   in $PATH.
	I1006 14:21:49.276137  649678 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1006 14:21:49.276141  649678 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1006 14:21:49.276149  649678 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1006 14:21:49.276153  649678 command_runner.go:130] > #   state.
	I1006 14:21:49.276159  649678 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1006 14:21:49.276165  649678 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1006 14:21:49.276173  649678 command_runner.go:130] > # - inherit_default_runtime (optional, bool): when true the runtime_path,
	I1006 14:21:49.276179  649678 command_runner.go:130] > #   runtime_type, runtime_root and runtime_config_path will be replaced by
	I1006 14:21:49.276186  649678 command_runner.go:130] > #   the values from the default runtime on load time.
	I1006 14:21:49.276193  649678 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1006 14:21:49.276200  649678 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1006 14:21:49.276242  649678 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1006 14:21:49.276258  649678 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1006 14:21:49.276269  649678 command_runner.go:130] > #   The currently recognized values are:
	I1006 14:21:49.276276  649678 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1006 14:21:49.276286  649678 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1006 14:21:49.276294  649678 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1006 14:21:49.276300  649678 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1006 14:21:49.276308  649678 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1006 14:21:49.276314  649678 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1006 14:21:49.276323  649678 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1006 14:21:49.276330  649678 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1006 14:21:49.276338  649678 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1006 14:21:49.276344  649678 command_runner.go:130] > #   "seccomp-profile.kubernetes.cri-o.io" for setting the seccomp profile for:
	I1006 14:21:49.276353  649678 command_runner.go:130] > #     - a specific container by using: "seccomp-profile.kubernetes.cri-o.io/<CONTAINER_NAME>"
	I1006 14:21:49.276359  649678 command_runner.go:130] > #     - a whole pod by using: "seccomp-profile.kubernetes.cri-o.io/POD"
	I1006 14:21:49.276370  649678 command_runner.go:130] > #     Note that the annotation works on containers as well as on images.
	I1006 14:21:49.276380  649678 command_runner.go:130] > #     For images, the plain annotation "seccomp-profile.kubernetes.cri-o.io"
	I1006 14:21:49.276386  649678 command_runner.go:130] > #     can be used without the required "/POD" suffix or a container name.
	I1006 14:21:49.276396  649678 command_runner.go:130] > #   "io.kubernetes.cri-o.DisableFIPS" for disabling FIPS mode in a Kubernetes pod within a FIPS-enabled cluster.
	I1006 14:21:49.276402  649678 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1006 14:21:49.276409  649678 command_runner.go:130] > #   deprecated option "conmon".
	I1006 14:21:49.276416  649678 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1006 14:21:49.276423  649678 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1006 14:21:49.276429  649678 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1006 14:21:49.276437  649678 command_runner.go:130] > #   should be moved to the container's cgroup
	I1006 14:21:49.276444  649678 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1006 14:21:49.276451  649678 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1006 14:21:49.276459  649678 command_runner.go:130] > #   When using the pod runtime and conmon-rs, the monitor_env can be used to further configure
	I1006 14:21:49.276465  649678 command_runner.go:130] > #   conmon-rs by using:
	I1006 14:21:49.276472  649678 command_runner.go:130] > #     - LOG_DRIVER=[none,systemd,stdout] - Enable logging to the configured target, defaults to none.
	I1006 14:21:49.276481  649678 command_runner.go:130] > #     - HEAPTRACK_OUTPUT_PATH=/path/to/dir - Enable heaptrack profiling and save the files to the set directory.
	I1006 14:21:49.276488  649678 command_runner.go:130] > #     - HEAPTRACK_BINARY_PATH=/path/to/heaptrack - Enable heaptrack profiling and use set heaptrack binary.
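For reference, a minimal drop-in sketch of the monitor_env wiring described above (the handler name, log driver, and output path are illustrative, not taken from this run):

    [crio.runtime.runtimes.runc]
    # "pod" selects conmon-rs as the container monitor for this handler
    # (assumes a conmon-rs-capable CRI-O build, per the notes above).
    runtime_type = "pod"
    monitor_env = [
        # Route conmon-rs logs to systemd instead of the default "none".
        "LOG_DRIVER=systemd",
        # Hypothetical output directory for heaptrack profiles.
        "HEAPTRACK_OUTPUT_PATH=/var/log/crio/heaptrack",
    ]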
	I1006 14:21:49.276494  649678 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1006 14:21:49.276502  649678 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1006 14:21:49.276509  649678 command_runner.go:130] > # - container_min_memory (optional, string): The minimum memory that must be set for a container.
	I1006 14:21:49.276519  649678 command_runner.go:130] > #   This value can be used to override the currently set global value for a specific runtime. If not set,
	I1006 14:21:49.276524  649678 command_runner.go:130] > #   a global default value of "12 MiB" will be used.
	I1006 14:21:49.276534  649678 command_runner.go:130] > # - no_sync_log (optional, bool): If set to true, the runtime will not sync the log file on rotate or container exit.
	I1006 14:21:49.276543  649678 command_runner.go:130] > #   This option is only valid for the 'oci' runtime type. Setting this option to true can cause data loss, e.g.
	I1006 14:21:49.276551  649678 command_runner.go:130] > #   when a machine crash happens.
	I1006 14:21:49.276558  649678 command_runner.go:130] > # - default_annotations (optional, map): Default annotations if not overridden by the pod spec.
	I1006 14:21:49.276568  649678 command_runner.go:130] > # - stream_websockets (optional, bool): Enable the WebSocket protocol for container exec, attach and port forward.
	I1006 14:21:49.276576  649678 command_runner.go:130] > # - seccomp_profile (optional, string): The absolute path of the seccomp.json profile which is used as the default
	I1006 14:21:49.276583  649678 command_runner.go:130] > #   seccomp profile for the runtime.
	I1006 14:21:49.276589  649678 command_runner.go:130] > #   If not specified or set to "", the runtime seccomp_profile will be used.
	I1006 14:21:49.276598  649678 command_runner.go:130] > #   If that is also not specified or set to "", the internal default seccomp profile will be applied.
	I1006 14:21:49.276601  649678 command_runner.go:130] > #
	I1006 14:21:49.276605  649678 command_runner.go:130] > # Using the seccomp notifier feature:
	I1006 14:21:49.276610  649678 command_runner.go:130] > #
	I1006 14:21:49.276617  649678 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1006 14:21:49.276626  649678 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1006 14:21:49.276629  649678 command_runner.go:130] > #
	I1006 14:21:49.276635  649678 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1006 14:21:49.276643  649678 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1006 14:21:49.276646  649678 command_runner.go:130] > #
	I1006 14:21:49.276655  649678 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1006 14:21:49.276664  649678 command_runner.go:130] > # feature.
	I1006 14:21:49.276670  649678 command_runner.go:130] > #
	I1006 14:21:49.276684  649678 command_runner.go:130] > # If everything is set up, CRI-O will modify chosen seccomp profiles for
	I1006 14:21:49.276693  649678 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1006 14:21:49.276700  649678 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1006 14:21:49.276708  649678 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1006 14:21:49.276714  649678 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1006 14:21:49.276720  649678 command_runner.go:130] > #
	I1006 14:21:49.276726  649678 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1006 14:21:49.276734  649678 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1006 14:21:49.276737  649678 command_runner.go:130] > #
	I1006 14:21:49.276745  649678 command_runner.go:130] > # This also means that the Pod's "restartPolicy" has to be set to "Never",
	I1006 14:21:49.276765  649678 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1006 14:21:49.276775  649678 command_runner.go:130] > #
	I1006 14:21:49.276785  649678 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1006 14:21:49.276795  649678 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1006 14:21:49.276798  649678 command_runner.go:130] > # limitation.
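Concretely, the enablement amounts to allowing the annotation on a runtime handler; a minimal sketch (assuming the runc handler configured below) looks like this, after which the Pod sets the annotation io.kubernetes.cri-o.seccompNotifierAction=stop together with restartPolicy: Never:

    [crio.runtime.runtimes.runc]
    runtime_path = "/usr/libexec/crio/runc"
    allowed_annotations = [
        # Permit the seccomp notifier annotation for workloads on this handler.
        "io.kubernetes.cri-o.seccompNotifierAction",
    ]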
	I1006 14:21:49.276802  649678 command_runner.go:130] > [crio.runtime.runtimes.crun]
	I1006 14:21:49.276807  649678 command_runner.go:130] > runtime_path = "/usr/libexec/crio/crun"
	I1006 14:21:49.276815  649678 command_runner.go:130] > runtime_type = ""
	I1006 14:21:49.276822  649678 command_runner.go:130] > runtime_root = "/run/crun"
	I1006 14:21:49.276833  649678 command_runner.go:130] > inherit_default_runtime = false
	I1006 14:21:49.276841  649678 command_runner.go:130] > runtime_config_path = ""
	I1006 14:21:49.276851  649678 command_runner.go:130] > container_min_memory = ""
	I1006 14:21:49.276860  649678 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1006 14:21:49.276871  649678 command_runner.go:130] > monitor_cgroup = "pod"
	I1006 14:21:49.276877  649678 command_runner.go:130] > monitor_exec_cgroup = ""
	I1006 14:21:49.276883  649678 command_runner.go:130] > allowed_annotations = [
	I1006 14:21:49.276890  649678 command_runner.go:130] > 	"io.containers.trace-syscall",
	I1006 14:21:49.276896  649678 command_runner.go:130] > ]
	I1006 14:21:49.276902  649678 command_runner.go:130] > privileged_without_host_devices = false
	I1006 14:21:49.276909  649678 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1006 14:21:49.276916  649678 command_runner.go:130] > runtime_path = "/usr/libexec/crio/runc"
	I1006 14:21:49.276922  649678 command_runner.go:130] > runtime_type = ""
	I1006 14:21:49.276929  649678 command_runner.go:130] > runtime_root = "/run/runc"
	I1006 14:21:49.276936  649678 command_runner.go:130] > inherit_default_runtime = false
	I1006 14:21:49.276946  649678 command_runner.go:130] > runtime_config_path = ""
	I1006 14:21:49.276954  649678 command_runner.go:130] > container_min_memory = ""
	I1006 14:21:49.276967  649678 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1006 14:21:49.276978  649678 command_runner.go:130] > monitor_cgroup = "pod"
	I1006 14:21:49.276984  649678 command_runner.go:130] > monitor_exec_cgroup = ""
	I1006 14:21:49.276991  649678 command_runner.go:130] > privileged_without_host_devices = false
	I1006 14:21:49.276998  649678 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1006 14:21:49.277005  649678 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1006 14:21:49.277012  649678 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1006 14:21:49.277036  649678 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I1006 14:21:49.277057  649678 command_runner.go:130] > # The currently supported resources are "cpuperiod", "cpuquota", "cpushares", "cpulimit" and "cpuset". The values for "cpuperiod" and "cpuquota" are denoted in microseconds.
	I1006 14:21:49.277077  649678 command_runner.go:130] > # The value for "cpulimit" is denoted in millicores; this value is used to calculate the "cpuquota" with the supplied "cpuperiod" or the default "cpuperiod".
	I1006 14:21:49.277093  649678 command_runner.go:130] > # Note that the "cpulimit" field overrides the "cpuquota" value supplied in this configuration.
	I1006 14:21:49.277104  649678 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1006 14:21:49.277125  649678 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1006 14:21:49.277141  649678 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1006 14:21:49.277151  649678 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1006 14:21:49.277167  649678 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1006 14:21:49.277177  649678 command_runner.go:130] > # Example:
	I1006 14:21:49.277189  649678 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1006 14:21:49.277201  649678 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1006 14:21:49.277225  649678 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1006 14:21:49.277238  649678 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1006 14:21:49.277249  649678 command_runner.go:130] > # cpuset = "0-1"
	I1006 14:21:49.277260  649678 command_runner.go:130] > # cpushares = "5"
	I1006 14:21:49.277270  649678 command_runner.go:130] > # cpuquota = "1000"
	I1006 14:21:49.277281  649678 command_runner.go:130] > # cpuperiod = "100000"
	I1006 14:21:49.277292  649678 command_runner.go:130] > # cpulimit = "35"
	I1006 14:21:49.277300  649678 command_runner.go:130] > # Where:
	I1006 14:21:49.277307  649678 command_runner.go:130] > # The workload name is workload-type.
	I1006 14:21:49.277323  649678 command_runner.go:130] > # To opt in, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1006 14:21:49.277336  649678 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1006 14:21:49.277349  649678 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1006 14:21:49.277366  649678 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1006 14:21:49.277381  649678 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1006 14:21:49.277393  649678 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1006 14:21:49.277406  649678 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1006 14:21:49.277416  649678 command_runner.go:130] > # Default value is set to true
	I1006 14:21:49.277427  649678 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1006 14:21:49.277441  649678 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1006 14:21:49.277453  649678 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1006 14:21:49.277465  649678 command_runner.go:130] > # Default value is set to 'false'
	I1006 14:21:49.277479  649678 command_runner.go:130] > # disable_hostport_mapping = false
	I1006 14:21:49.277492  649678 command_runner.go:130] > # timezone sets the timezone for a container in CRI-O.
	I1006 14:21:49.277513  649678 command_runner.go:130] > # If an empty string is provided, CRI-O retains its default behavior. Use 'Local' to match the timezone of the host machine.
	I1006 14:21:49.277521  649678 command_runner.go:130] > # timezone = ""
	I1006 14:21:49.277531  649678 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1006 14:21:49.277536  649678 command_runner.go:130] > #
	I1006 14:21:49.277547  649678 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1006 14:21:49.277557  649678 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf.
	I1006 14:21:49.277565  649678 command_runner.go:130] > [crio.image]
	I1006 14:21:49.277578  649678 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1006 14:21:49.277589  649678 command_runner.go:130] > # default_transport = "docker://"
	I1006 14:21:49.277603  649678 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1006 14:21:49.277617  649678 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1006 14:21:49.277627  649678 command_runner.go:130] > # global_auth_file = ""
	I1006 14:21:49.277652  649678 command_runner.go:130] > # The image used to instantiate infra containers.
	I1006 14:21:49.277665  649678 command_runner.go:130] > # This option supports live configuration reload.
	I1006 14:21:49.277675  649678 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.10.1"
	I1006 14:21:49.277690  649678 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1006 14:21:49.277704  649678 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1006 14:21:49.277715  649678 command_runner.go:130] > # This option supports live configuration reload.
	I1006 14:21:49.277730  649678 command_runner.go:130] > # pause_image_auth_file = ""
	I1006 14:21:49.277741  649678 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1006 14:21:49.277755  649678 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I1006 14:21:49.277770  649678 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I1006 14:21:49.277785  649678 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1006 14:21:49.277796  649678 command_runner.go:130] > # pause_command = "/pause"
	I1006 14:21:49.277811  649678 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1006 14:21:49.277824  649678 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1006 14:21:49.277838  649678 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1006 14:21:49.277851  649678 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1006 14:21:49.277864  649678 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1006 14:21:49.277879  649678 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1006 14:21:49.277889  649678 command_runner.go:130] > # pinned_images = [
	I1006 14:21:49.277904  649678 command_runner.go:130] > # ]
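As an illustration of the three pattern styles (values are hypothetical, not from this run's config):

    pinned_images = [
        "registry.k8s.io/pause:3.10.1", # exact: must match the entire name
        "quay.io/crio/*",               # glob: wildcard only at the end
        "*nginx*",                      # keyword: wildcards on both ends
    ]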
	I1006 14:21:49.277918  649678 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1006 14:21:49.277929  649678 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1006 14:21:49.277943  649678 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1006 14:21:49.277957  649678 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1006 14:21:49.277969  649678 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1006 14:21:49.277982  649678 command_runner.go:130] > signature_policy = "/etc/crio/policy.json"
	I1006 14:21:49.277994  649678 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1006 14:21:49.278013  649678 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1006 14:21:49.278025  649678 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1006 14:21:49.278042  649678 command_runner.go:130] > # or the concatenated path is non-existent, then the signature_policy or system
	I1006 14:21:49.278056  649678 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1006 14:21:49.278069  649678 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
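Worked example of the path resolution (the namespace name is hypothetical):

    signature_policy_dir = "/etc/crio/policies"
    # image pull in namespace "kube-system"
    #   -> policy file: /etc/crio/policies/kube-system.json
    #   -> if absent: fall back to signature_policy / the system-wide policy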
	I1006 14:21:49.278083  649678 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1006 14:21:49.278099  649678 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1006 14:21:49.278109  649678 command_runner.go:130] > # changing them here.
	I1006 14:21:49.278127  649678 command_runner.go:130] > # This option is deprecated. Use registries.conf file instead.
	I1006 14:21:49.278138  649678 command_runner.go:130] > # insecure_registries = [
	I1006 14:21:49.278148  649678 command_runner.go:130] > # ]
	I1006 14:21:49.278163  649678 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1006 14:21:49.278181  649678 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1006 14:21:49.278192  649678 command_runner.go:130] > # image_volumes = "mkdir"
	I1006 14:21:49.278214  649678 command_runner.go:130] > # Temporary directory to use for storing big files
	I1006 14:21:49.278227  649678 command_runner.go:130] > # big_files_temporary_dir = ""
	I1006 14:21:49.278237  649678 command_runner.go:130] > # If true, CRI-O will automatically reload the mirror registry when
	I1006 14:21:49.278253  649678 command_runner.go:130] > # there is an update to the 'registries.conf.d' directory. Default value is set to 'false'.
	I1006 14:21:49.278265  649678 command_runner.go:130] > # auto_reload_registries = false
	I1006 14:21:49.278278  649678 command_runner.go:130] > # The timeout for an image pull to make progress until the pull operation
	I1006 14:21:49.278294  649678 command_runner.go:130] > # gets canceled. This value will also be used to calculate the pull progress interval as pull_progress_timeout / 10.
	I1006 14:21:49.278307  649678 command_runner.go:130] > # Can be set to 0 to disable the timeout as well as the progress output.
	I1006 14:21:49.278317  649678 command_runner.go:130] > # pull_progress_timeout = "0s"
	I1006 14:21:49.278329  649678 command_runner.go:130] > # The mode of short name resolution.
	I1006 14:21:49.278343  649678 command_runner.go:130] > # The valid values are "enforcing" and "disabled", and the default is "enforcing".
	I1006 14:21:49.278364  649678 command_runner.go:130] > # If "enforcing", an image pull will fail if a short name is used and the results are ambiguous.
	I1006 14:21:49.278377  649678 command_runner.go:130] > # If "disabled", the first result will be chosen.
	I1006 14:21:49.278389  649678 command_runner.go:130] > # short_name_mode = "enforcing"
	I1006 14:21:49.278403  649678 command_runner.go:130] > # OCIArtifactMountSupport is whether CRI-O should support OCI artifacts.
	I1006 14:21:49.278414  649678 command_runner.go:130] > # If set to false, mounting OCI Artifacts will result in an error.
	I1006 14:21:49.278425  649678 command_runner.go:130] > # oci_artifact_mount_support = true
	I1006 14:21:49.278440  649678 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1006 14:21:49.278450  649678 command_runner.go:130] > # CNI plugins.
	I1006 14:21:49.278460  649678 command_runner.go:130] > [crio.network]
	I1006 14:21:49.278474  649678 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1006 14:21:49.278486  649678 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I1006 14:21:49.278497  649678 command_runner.go:130] > # cni_default_network = ""
	I1006 14:21:49.278508  649678 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1006 14:21:49.278519  649678 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1006 14:21:49.278532  649678 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1006 14:21:49.278543  649678 command_runner.go:130] > # plugin_dirs = [
	I1006 14:21:49.278554  649678 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1006 14:21:49.278563  649678 command_runner.go:130] > # ]
	I1006 14:21:49.278574  649678 command_runner.go:130] > # List of included pod metrics.
	I1006 14:21:49.278586  649678 command_runner.go:130] > # included_pod_metrics = [
	I1006 14:21:49.278594  649678 command_runner.go:130] > # ]
	I1006 14:21:49.278605  649678 command_runner.go:130] > # A necessary configuration for Prometheus-based metrics retrieval
	I1006 14:21:49.278615  649678 command_runner.go:130] > [crio.metrics]
	I1006 14:21:49.278627  649678 command_runner.go:130] > # Globally enable or disable metrics support.
	I1006 14:21:49.278639  649678 command_runner.go:130] > # enable_metrics = false
	I1006 14:21:49.278651  649678 command_runner.go:130] > # Specify enabled metrics collectors.
	I1006 14:21:49.278662  649678 command_runner.go:130] > # Per default all metrics are enabled.
	I1006 14:21:49.278676  649678 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1006 14:21:49.278689  649678 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1006 14:21:49.278700  649678 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1006 14:21:49.278712  649678 command_runner.go:130] > # metrics_collectors = [
	I1006 14:21:49.278718  649678 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1006 14:21:49.278727  649678 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1006 14:21:49.278740  649678 command_runner.go:130] > # 	"containers_oom_total",
	I1006 14:21:49.278747  649678 command_runner.go:130] > # 	"processes_defunct",
	I1006 14:21:49.278754  649678 command_runner.go:130] > # 	"operations_total",
	I1006 14:21:49.278761  649678 command_runner.go:130] > # 	"operations_latency_seconds",
	I1006 14:21:49.278769  649678 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1006 14:21:49.278776  649678 command_runner.go:130] > # 	"operations_errors_total",
	I1006 14:21:49.278786  649678 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1006 14:21:49.278798  649678 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1006 14:21:49.278810  649678 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1006 14:21:49.278822  649678 command_runner.go:130] > # 	"image_pulls_success_total",
	I1006 14:21:49.278833  649678 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1006 14:21:49.278844  649678 command_runner.go:130] > # 	"containers_oom_count_total",
	I1006 14:21:49.278856  649678 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1006 14:21:49.278867  649678 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1006 14:21:49.278878  649678 command_runner.go:130] > # 	"containers_stopped_monitor_count",
	I1006 14:21:49.278886  649678 command_runner.go:130] > # ]
	I1006 14:21:49.278896  649678 command_runner.go:130] > # The IP address or hostname on which the metrics server will listen.
	I1006 14:21:49.278907  649678 command_runner.go:130] > # metrics_host = "127.0.0.1"
	I1006 14:21:49.278916  649678 command_runner.go:130] > # The port on which the metrics server will listen.
	I1006 14:21:49.278927  649678 command_runner.go:130] > # metrics_port = 9090
	I1006 14:21:49.278939  649678 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1006 14:21:49.278950  649678 command_runner.go:130] > # metrics_socket = ""
	I1006 14:21:49.278962  649678 command_runner.go:130] > # The certificate for the secure metrics server.
	I1006 14:21:49.278975  649678 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1006 14:21:49.278986  649678 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1006 14:21:49.278998  649678 command_runner.go:130] > # certificate on any modification event.
	I1006 14:21:49.279009  649678 command_runner.go:130] > # metrics_cert = ""
	I1006 14:21:49.279018  649678 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1006 14:21:49.279031  649678 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1006 14:21:49.279042  649678 command_runner.go:130] > # metrics_key = ""
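Taken together, a hedged example of an explicitly configured metrics endpoint (all values illustrative; with enable_metrics left at false none of this takes effect):

    [crio.metrics]
    enable_metrics = true
    metrics_collectors = [
        # Treated the same as "crio_operations_total" and
        # "container_runtime_crio_operations_total", per the note above.
        "operations_total",
    ]
    metrics_host = "127.0.0.1"
    metrics_port = 9090
    # If these files are missing, CRI-O generates a self-signed certificate.
    metrics_cert = "/etc/crio/metrics.crt"
    metrics_key  = "/etc/crio/metrics.key"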
	I1006 14:21:49.279054  649678 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1006 14:21:49.279065  649678 command_runner.go:130] > [crio.tracing]
	I1006 14:21:49.279078  649678 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1006 14:21:49.279088  649678 command_runner.go:130] > # enable_tracing = false
	I1006 14:21:49.279100  649678 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1006 14:21:49.279118  649678 command_runner.go:130] > # tracing_endpoint = "127.0.0.1:4317"
	I1006 14:21:49.279133  649678 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1006 14:21:49.279145  649678 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1006 14:21:49.279155  649678 command_runner.go:130] > # CRI-O NRI configuration.
	I1006 14:21:49.279165  649678 command_runner.go:130] > [crio.nri]
	I1006 14:21:49.279176  649678 command_runner.go:130] > # Globally enable or disable NRI.
	I1006 14:21:49.279185  649678 command_runner.go:130] > # enable_nri = true
	I1006 14:21:49.279195  649678 command_runner.go:130] > # NRI socket to listen on.
	I1006 14:21:49.279220  649678 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1006 14:21:49.279232  649678 command_runner.go:130] > # NRI plugin directory to use.
	I1006 14:21:49.279239  649678 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1006 14:21:49.279251  649678 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1006 14:21:49.279263  649678 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1006 14:21:49.279276  649678 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1006 14:21:49.279348  649678 command_runner.go:130] > # nri_disable_connections = false
	I1006 14:21:49.279363  649678 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1006 14:21:49.279371  649678 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1006 14:21:49.279381  649678 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1006 14:21:49.279393  649678 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1006 14:21:49.279404  649678 command_runner.go:130] > # NRI default validator configuration.
	I1006 14:21:49.279420  649678 command_runner.go:130] > # If enabled, the builtin default validator can be used to reject a container if some
	I1006 14:21:49.279434  649678 command_runner.go:130] > # NRI plugin requested a restricted adjustment. Currently the following adjustments
	I1006 14:21:49.279445  649678 command_runner.go:130] > # can be restricted/rejected:
	I1006 14:21:49.279455  649678 command_runner.go:130] > # - OCI hook injection
	I1006 14:21:49.279467  649678 command_runner.go:130] > # - adjustment of runtime default seccomp profile
	I1006 14:21:49.279479  649678 command_runner.go:130] > # - adjustment of unconfined seccomp profile
	I1006 14:21:49.279488  649678 command_runner.go:130] > # - adjustment of a custom seccomp profile
	I1006 14:21:49.279499  649678 command_runner.go:130] > # - adjustment of linux namespaces
	I1006 14:21:49.279513  649678 command_runner.go:130] > # Additionally, the default validator can be used to reject container creation if any
	I1006 14:21:49.279528  649678 command_runner.go:130] > # of a required set of plugins has not processed a container creation request, unless
	I1006 14:21:49.279541  649678 command_runner.go:130] > # the container has been annotated to tolerate a missing plugin.
	I1006 14:21:49.279550  649678 command_runner.go:130] > #
	I1006 14:21:49.279561  649678 command_runner.go:130] > # [crio.nri.default_validator]
	I1006 14:21:49.279574  649678 command_runner.go:130] > # nri_enable_default_validator = false
	I1006 14:21:49.279587  649678 command_runner.go:130] > # nri_validator_reject_oci_hook_adjustment = false
	I1006 14:21:49.279600  649678 command_runner.go:130] > # nri_validator_reject_runtime_default_seccomp_adjustment = false
	I1006 14:21:49.279613  649678 command_runner.go:130] > # nri_validator_reject_unconfined_seccomp_adjustment = false
	I1006 14:21:49.279626  649678 command_runner.go:130] > # nri_validator_reject_custom_seccomp_adjustment = false
	I1006 14:21:49.279636  649678 command_runner.go:130] > # nri_validator_reject_namespace_adjustment = false
	I1006 14:21:49.279646  649678 command_runner.go:130] > # nri_validator_required_plugins = [
	I1006 14:21:49.279656  649678 command_runner.go:130] > # ]
	I1006 14:21:49.279668  649678 command_runner.go:130] > # nri_validator_tolerate_missing_plugins_annotation = ""
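A sketch of switching the validator on for a subset of the restrictions above (illustrative only; the required plugin name is hypothetical):

    [crio.nri.default_validator]
    nri_enable_default_validator = true
    # Reject containers for which an NRI plugin injected OCI hooks or
    # adjusted the runtime default seccomp profile.
    nri_validator_reject_oci_hook_adjustment = true
    nri_validator_reject_runtime_default_seccomp_adjustment = true
    nri_validator_required_plugins = [
        "example-policy-plugin",
    ]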
	I1006 14:21:49.279681  649678 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1006 14:21:49.279691  649678 command_runner.go:130] > [crio.stats]
	I1006 14:21:49.279704  649678 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1006 14:21:49.279717  649678 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1006 14:21:49.279728  649678 command_runner.go:130] > # stats_collection_period = 0
	I1006 14:21:49.279739  649678 command_runner.go:130] > # The number of seconds between collecting pod/container stats and pod
	I1006 14:21:49.279753  649678 command_runner.go:130] > # sandbox metrics. If set to 0, the metrics/stats are collected on-demand instead.
	I1006 14:21:49.279764  649678 command_runner.go:130] > # collection_period = 0
	I1006 14:21:49.279811  649678 command_runner.go:130] ! time="2025-10-06T14:21:49.258239123Z" level=info msg="Updating config from single file: /etc/crio/crio.conf"
	I1006 14:21:49.279828  649678 command_runner.go:130] ! time="2025-10-06T14:21:49.258265766Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf"
	I1006 14:21:49.279842  649678 command_runner.go:130] ! time="2025-10-06T14:21:49.258283938Z" level=info msg="Skipping not-existing config file \"/etc/crio/crio.conf\""
	I1006 14:21:49.279857  649678 command_runner.go:130] ! time="2025-10-06T14:21:49.25830256Z" level=info msg="Updating config from path: /etc/crio/crio.conf.d"
	I1006 14:21:49.279875  649678 command_runner.go:130] ! time="2025-10-06T14:21:49.258357499Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:21:49.279892  649678 command_runner.go:130] ! time="2025-10-06T14:21:49.258517334Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/10-crio.conf"
	I1006 14:21:49.279912  649678 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1006 14:21:49.280045  649678 cni.go:84] Creating CNI manager for ""
	I1006 14:21:49.280059  649678 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1006 14:21:49.280078  649678 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1006 14:21:49.280122  649678 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-135520 NodeName:functional-135520 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/
kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1006 14:21:49.280303  649678 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-135520"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1006 14:21:49.280384  649678 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1006 14:21:49.288800  649678 command_runner.go:130] > kubeadm
	I1006 14:21:49.288826  649678 command_runner.go:130] > kubectl
	I1006 14:21:49.288833  649678 command_runner.go:130] > kubelet
	I1006 14:21:49.288864  649678 binaries.go:44] Found k8s binaries, skipping transfer
	I1006 14:21:49.288912  649678 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1006 14:21:49.296476  649678 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1006 14:21:49.308883  649678 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1006 14:21:49.321172  649678 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
	I1006 14:21:49.333376  649678 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1006 14:21:49.336963  649678 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1006 14:21:49.337019  649678 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 14:21:49.424422  649678 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1006 14:21:49.437476  649678 certs.go:69] Setting up /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520 for IP: 192.168.49.2
	I1006 14:21:49.437505  649678 certs.go:195] generating shared ca certs ...
	I1006 14:21:49.437527  649678 certs.go:227] acquiring lock for ca certs: {Name:mka0cc25cb6a953e937aa825fc55167759271aaf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:21:49.437678  649678 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21701-626179/.minikube/ca.key
	I1006 14:21:49.437730  649678 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21701-626179/.minikube/proxy-client-ca.key
	I1006 14:21:49.437748  649678 certs.go:257] generating profile certs ...
	I1006 14:21:49.437847  649678 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/client.key
	I1006 14:21:49.437896  649678 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/apiserver.key.72a46e8e
	I1006 14:21:49.437936  649678 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/proxy-client.key
	I1006 14:21:49.437949  649678 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1006 14:21:49.437963  649678 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1006 14:21:49.437984  649678 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1006 14:21:49.438003  649678 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1006 14:21:49.438018  649678 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1006 14:21:49.438035  649678 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1006 14:21:49.438049  649678 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1006 14:21:49.438064  649678 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1006 14:21:49.438123  649678 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/629719.pem (1338 bytes)
	W1006 14:21:49.438160  649678 certs.go:480] ignoring /home/jenkins/minikube-integration/21701-626179/.minikube/certs/629719_empty.pem, impossibly tiny 0 bytes
	I1006 14:21:49.438171  649678 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca-key.pem (1675 bytes)
	I1006 14:21:49.438196  649678 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem (1082 bytes)
	I1006 14:21:49.438246  649678 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/cert.pem (1123 bytes)
	I1006 14:21:49.438271  649678 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/key.pem (1679 bytes)
	I1006 14:21:49.438316  649678 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem (1708 bytes)
	I1006 14:21:49.438344  649678 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem -> /usr/share/ca-certificates/6297192.pem
	I1006 14:21:49.438359  649678 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1006 14:21:49.438381  649678 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/629719.pem -> /usr/share/ca-certificates/629719.pem
	I1006 14:21:49.439032  649678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1006 14:21:49.456437  649678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1006 14:21:49.473578  649678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1006 14:21:49.490593  649678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1006 14:21:49.508347  649678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1006 14:21:49.525339  649678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1006 14:21:49.541997  649678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1006 14:21:49.558467  649678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1006 14:21:49.576359  649678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem --> /usr/share/ca-certificates/6297192.pem (1708 bytes)
	I1006 14:21:49.593578  649678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1006 14:21:49.610863  649678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/certs/629719.pem --> /usr/share/ca-certificates/629719.pem (1338 bytes)
	I1006 14:21:49.628123  649678 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1006 14:21:49.640270  649678 ssh_runner.go:195] Run: openssl version
	I1006 14:21:49.646279  649678 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1006 14:21:49.646391  649678 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6297192.pem && ln -fs /usr/share/ca-certificates/6297192.pem /etc/ssl/certs/6297192.pem"
	I1006 14:21:49.654553  649678 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6297192.pem
	I1006 14:21:49.658110  649678 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct  6 14:13 /usr/share/ca-certificates/6297192.pem
	I1006 14:21:49.658254  649678 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  6 14:13 /usr/share/ca-certificates/6297192.pem
	I1006 14:21:49.658303  649678 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6297192.pem
	I1006 14:21:49.692318  649678 command_runner.go:130] > 3ec20f2e
	I1006 14:21:49.692406  649678 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6297192.pem /etc/ssl/certs/3ec20f2e.0"
	I1006 14:21:49.700814  649678 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1006 14:21:49.709140  649678 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1006 14:21:49.712721  649678 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct  6 13:56 /usr/share/ca-certificates/minikubeCA.pem
	I1006 14:21:49.712738  649678 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  6 13:56 /usr/share/ca-certificates/minikubeCA.pem
	I1006 14:21:49.712772  649678 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1006 14:21:49.745663  649678 command_runner.go:130] > b5213941
	I1006 14:21:49.745998  649678 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1006 14:21:49.754083  649678 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/629719.pem && ln -fs /usr/share/ca-certificates/629719.pem /etc/ssl/certs/629719.pem"
	I1006 14:21:49.762664  649678 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/629719.pem
	I1006 14:21:49.766415  649678 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct  6 14:13 /usr/share/ca-certificates/629719.pem
	I1006 14:21:49.766461  649678 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  6 14:13 /usr/share/ca-certificates/629719.pem
	I1006 14:21:49.766502  649678 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/629719.pem
	I1006 14:21:49.800644  649678 command_runner.go:130] > 51391683
	I1006 14:21:49.800985  649678 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/629719.pem /etc/ssl/certs/51391683.0"
	I1006 14:21:49.809049  649678 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1006 14:21:49.812721  649678 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1006 14:21:49.812776  649678 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1006 14:21:49.812784  649678 command_runner.go:130] > Device: 8,1	Inode: 580300      Links: 1
	I1006 14:21:49.812793  649678 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1006 14:21:49.812800  649678 command_runner.go:130] > Access: 2025-10-06 14:17:42.533320203 +0000
	I1006 14:21:49.812811  649678 command_runner.go:130] > Modify: 2025-10-06 14:13:37.457627952 +0000
	I1006 14:21:49.812819  649678 command_runner.go:130] > Change: 2025-10-06 14:13:37.457627952 +0000
	I1006 14:21:49.812829  649678 command_runner.go:130] >  Birth: 2025-10-06 14:13:37.457627952 +0000
	I1006 14:21:49.812886  649678 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1006 14:21:49.846896  649678 command_runner.go:130] > Certificate will not expire
	I1006 14:21:49.847277  649678 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1006 14:21:49.881096  649678 command_runner.go:130] > Certificate will not expire
	I1006 14:21:49.881431  649678 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1006 14:21:49.916333  649678 command_runner.go:130] > Certificate will not expire
	I1006 14:21:49.916837  649678 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1006 14:21:49.951128  649678 command_runner.go:130] > Certificate will not expire
	I1006 14:21:49.951323  649678 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1006 14:21:49.984919  649678 command_runner.go:130] > Certificate will not expire
	I1006 14:21:49.985255  649678 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1006 14:21:50.018710  649678 command_runner.go:130] > Certificate will not expire
	I1006 14:21:50.018987  649678 kubeadm.go:400] StartCluster: {Name:functional-135520 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-135520 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: So
cketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 14:21:50.019061  649678 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1006 14:21:50.019118  649678 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1006 14:21:50.047552  649678 cri.go:89] found id: ""
	I1006 14:21:50.047624  649678 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1006 14:21:50.055103  649678 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1006 14:21:50.055125  649678 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1006 14:21:50.055137  649678 command_runner.go:130] > /var/lib/minikube/etcd:
	I1006 14:21:50.055780  649678 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1006 14:21:50.055795  649678 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1006 14:21:50.055835  649678 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1006 14:21:50.063106  649678 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1006 14:21:50.063218  649678 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-135520" does not appear in /home/jenkins/minikube-integration/21701-626179/kubeconfig
	I1006 14:21:50.063263  649678 kubeconfig.go:62] /home/jenkins/minikube-integration/21701-626179/kubeconfig needs updating (will repair): [kubeconfig missing "functional-135520" cluster setting kubeconfig missing "functional-135520" context setting]
	I1006 14:21:50.063581  649678 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-626179/kubeconfig: {Name:mke84a74c9d22714f21826744ac414fa621492d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:21:50.064282  649678 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21701-626179/kubeconfig
	I1006 14:21:50.064435  649678 kapi.go:59] client config for functional-135520: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/client.crt", KeyFile:"/home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/client.key", CAFile:"/home/jenkins/minikube-integration/21701-626179/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1006 14:21:50.064874  649678 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1006 14:21:50.064894  649678 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1006 14:21:50.064898  649678 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1006 14:21:50.064902  649678 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1006 14:21:50.064906  649678 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1006 14:21:50.064950  649678 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1006 14:21:50.065393  649678 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1006 14:21:50.072886  649678 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1006 14:21:50.072922  649678 kubeadm.go:601] duration metric: took 17.120794ms to restartPrimaryControlPlane
	I1006 14:21:50.072932  649678 kubeadm.go:402] duration metric: took 53.951913ms to StartCluster
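For context on the kubeconfig repair logged at 14:21:50.063: before restarting the control plane, the tool checks whether the profile has both a cluster and a context entry in the kubeconfig, and only then logs "needs updating (will repair)". A minimal sketch of that check using client-go's clientcmd package; the path and profile name are copied from the log above, and this is illustrative rather than minikube's actual implementation:

package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Path and profile name copied from the log lines above.
	path := "/home/jenkins/minikube-integration/21701-626179/kubeconfig"
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		panic(err)
	}
	const profile = "functional-135520"
	_, hasCluster := cfg.Clusters[profile]
	_, hasContext := cfg.Contexts[profile]
	if !hasCluster || !hasContext {
		// Mirrors the "kubeconfig needs updating (will repair)" decision.
		fmt.Printf("%s needs updating: cluster present=%v, context present=%v\n",
			path, hasCluster, hasContext)
	}
}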
	I1006 14:21:50.072948  649678 settings.go:142] acquiring lock: {Name:mk49b10f71f24d1f54d5c453b3b04e717e9a9100 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:21:50.073763  649678 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21701-626179/kubeconfig
	I1006 14:21:50.074346  649678 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-626179/kubeconfig: {Name:mke84a74c9d22714f21826744ac414fa621492d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:21:50.074579  649678 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1006 14:21:50.074661  649678 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1006 14:21:50.074799  649678 addons.go:69] Setting storage-provisioner=true in profile "functional-135520"
	I1006 14:21:50.074825  649678 addons.go:238] Setting addon storage-provisioner=true in "functional-135520"
	I1006 14:21:50.074761  649678 config.go:182] Loaded profile config "functional-135520": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 14:21:50.074866  649678 addons.go:69] Setting default-storageclass=true in profile "functional-135520"
	I1006 14:21:50.074859  649678 host.go:66] Checking if "functional-135520" exists ...
	I1006 14:21:50.074881  649678 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-135520"
	I1006 14:21:50.075174  649678 cli_runner.go:164] Run: docker container inspect functional-135520 --format={{.State.Status}}
	I1006 14:21:50.075488  649678 cli_runner.go:164] Run: docker container inspect functional-135520 --format={{.State.Status}}
	I1006 14:21:50.077233  649678 out.go:179] * Verifying Kubernetes components...
	I1006 14:21:50.078370  649678 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 14:21:50.095495  649678 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21701-626179/kubeconfig
	I1006 14:21:50.095656  649678 kapi.go:59] client config for functional-135520: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/client.crt", KeyFile:"/home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/client.key", CAFile:"/home/jenkins/minikube-integration/21701-626179/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1006 14:21:50.095938  649678 addons.go:238] Setting addon default-storageclass=true in "functional-135520"
	I1006 14:21:50.095974  649678 host.go:66] Checking if "functional-135520" exists ...
	I1006 14:21:50.096327  649678 cli_runner.go:164] Run: docker container inspect functional-135520 --format={{.State.Status}}
	I1006 14:21:50.100068  649678 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1006 14:21:50.101767  649678 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 14:21:50.101786  649678 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1006 14:21:50.101831  649678 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-135520
	I1006 14:21:50.122986  649678 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1006 14:21:50.123017  649678 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1006 14:21:50.123083  649678 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-135520
	I1006 14:21:50.128190  649678 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32878 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/functional-135520/id_rsa Username:docker}
	I1006 14:21:50.141305  649678 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32878 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/functional-135520/id_rsa Username:docker}
	I1006 14:21:50.171892  649678 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1006 14:21:50.185683  649678 node_ready.go:35] waiting up to 6m0s for node "functional-135520" to be "Ready" ...
	I1006 14:21:50.185842  649678 type.go:168] "Request Body" body=""
	I1006 14:21:50.185906  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:21:50.186211  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:21:50.238569  649678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 14:21:50.250369  649678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1006 14:21:50.297302  649678 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:21:50.297371  649678 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:50.297421  649678 retry.go:31] will retry after 341.445316ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:50.306094  649678 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:21:50.306137  649678 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:50.306156  649678 retry.go:31] will retry after 289.440052ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
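The "will retry after Nms" lines come from the retry helper logged as retry.go:31. Across attempts for a given manifest the delays grow roughly exponentially with random jitter, which is why the waits above start around 200-350ms and later in this log climb past 11s. A minimal, self-contained sketch of that pattern (illustrative only; this is not minikube's actual retry implementation, and the error string is a stand-in copied from the log):

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// retry runs f until it succeeds or attempts are exhausted, sleeping an
// exponentially growing, jittered delay between tries.
func retry(attempts int, base time.Duration, f func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = f(); err == nil {
			return nil
		}
		delay := base<<uint(i) + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
	}
	return err
}

func main() {
	err := retry(5, 300*time.Millisecond, func() error {
		// Stand-in for the failing kubectl apply seen in the log.
		return fmt.Errorf("dial tcp [::1]:8441: connect: connection refused")
	})
	fmt.Println("gave up:", err)
}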
	I1006 14:21:50.596773  649678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1006 14:21:50.639555  649678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 14:21:50.652478  649678 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:21:50.652547  649678 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:50.652572  649678 retry.go:31] will retry after 276.474886ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:50.686728  649678 type.go:168] "Request Body" body=""
	I1006 14:21:50.686820  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:21:50.687192  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:21:50.696244  649678 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:21:50.696297  649678 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:50.696320  649678 retry.go:31] will retry after 208.115159ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:50.904724  649678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 14:21:50.929427  649678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1006 14:21:50.961651  649678 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:21:50.961718  649678 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:50.961741  649678 retry.go:31] will retry after 526.763649ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:50.984274  649678 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:21:50.988765  649678 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:50.988799  649678 retry.go:31] will retry after 299.40846ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:51.186119  649678 type.go:168] "Request Body" body=""
	I1006 14:21:51.186232  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:21:51.186600  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:21:51.288897  649678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1006 14:21:51.344296  649678 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:21:51.344362  649678 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:51.344390  649678 retry.go:31] will retry after 1.255489073s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:51.489635  649678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 14:21:51.542509  649678 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:21:51.545518  649678 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:51.545558  649678 retry.go:31] will retry after 1.109395122s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:51.686960  649678 type.go:168] "Request Body" body=""
	I1006 14:21:51.687044  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:21:51.687429  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:21:52.186098  649678 type.go:168] "Request Body" body=""
	I1006 14:21:52.186177  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:21:52.186579  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:21:52.186647  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
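The repeated GET https://192.168.49.2:8441/api/v1/nodes/functional-135520 requests above are the readiness poll started at 14:21:50.185 ("waiting up to 6m0s for node ... to be Ready"); each attempt fails with "connection refused" while the apiserver is down and is retried roughly every 500ms. A sketch of the same polling pattern with client-go, assuming the kubeconfig path and node name taken from the log (illustrative, not minikube's node_ready implementation):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path and node name are copied from the log above.
	cfg, err := clientcmd.BuildConfigFromFlags("",
		"/home/jenkins/minikube-integration/21701-626179/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(6 * time.Minute) // the 6m0s budget from the log
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(),
			"functional-135520", metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					fmt.Println("node is Ready")
					return
				}
			}
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms poll interval above
	}
	fmt.Println("timed out waiting for node Ready")
}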
	I1006 14:21:52.600133  649678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1006 14:21:52.654438  649678 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:21:52.654496  649678 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:52.654515  649678 retry.go:31] will retry after 1.609702337s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:52.655551  649678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 14:21:52.686897  649678 type.go:168] "Request Body" body=""
	I1006 14:21:52.686998  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:21:52.687382  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:21:52.709517  649678 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:21:52.709578  649678 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:52.709602  649678 retry.go:31] will retry after 1.712984533s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:53.186162  649678 type.go:168] "Request Body" body=""
	I1006 14:21:53.186283  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:21:53.186685  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:21:53.686305  649678 type.go:168] "Request Body" body=""
	I1006 14:21:53.686410  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:21:53.686778  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:21:54.186389  649678 type.go:168] "Request Body" body=""
	I1006 14:21:54.186497  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:21:54.186895  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:21:54.186974  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:21:54.265161  649678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1006 14:21:54.320415  649678 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:21:54.320465  649678 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:54.320484  649678 retry.go:31] will retry after 1.901708606s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:54.423753  649678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 14:21:54.478522  649678 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:21:54.478584  649678 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:54.478619  649678 retry.go:31] will retry after 1.584586857s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:54.685879  649678 type.go:168] "Request Body" body=""
	I1006 14:21:54.685954  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:21:54.686309  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:21:55.185880  649678 type.go:168] "Request Body" body=""
	I1006 14:21:55.185961  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:21:55.186309  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:21:55.685969  649678 type.go:168] "Request Body" body=""
	I1006 14:21:55.686071  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:21:55.686478  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:21:56.063981  649678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 14:21:56.118717  649678 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:21:56.118774  649678 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:56.118807  649678 retry.go:31] will retry after 2.733091815s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:56.185931  649678 type.go:168] "Request Body" body=""
	I1006 14:21:56.186008  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:21:56.186344  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:21:56.222525  649678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1006 14:21:56.276120  649678 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:21:56.276196  649678 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:56.276235  649678 retry.go:31] will retry after 1.816128137s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:56.686920  649678 type.go:168] "Request Body" body=""
	I1006 14:21:56.687009  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:21:56.687408  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:21:56.687471  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:21:57.186225  649678 type.go:168] "Request Body" body=""
	I1006 14:21:57.186314  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:21:57.186655  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:21:57.686516  649678 type.go:168] "Request Body" body=""
	I1006 14:21:57.686601  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:21:57.686915  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:21:58.093526  649678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1006 14:21:58.148989  649678 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:21:58.149041  649678 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:58.149066  649678 retry.go:31] will retry after 2.492749577s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:58.186253  649678 type.go:168] "Request Body" body=""
	I1006 14:21:58.186345  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:21:58.186702  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:21:58.686540  649678 type.go:168] "Request Body" body=""
	I1006 14:21:58.686625  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:21:58.686963  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:21:58.852333  649678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 14:21:58.907770  649678 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:21:58.907811  649678 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:58.907831  649678 retry.go:31] will retry after 3.408188619s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:59.186242  649678 type.go:168] "Request Body" body=""
	I1006 14:21:59.186325  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:21:59.186705  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:21:59.186784  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:21:59.686631  649678 type.go:168] "Request Body" body=""
	I1006 14:21:59.686729  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:21:59.687112  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:00.185903  649678 type.go:168] "Request Body" body=""
	I1006 14:22:00.185998  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:00.186365  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:00.642984  649678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1006 14:22:00.686799  649678 type.go:168] "Request Body" body=""
	I1006 14:22:00.686880  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:00.687243  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:00.698375  649678 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:22:00.698427  649678 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:22:00.698448  649678 retry.go:31] will retry after 6.594317937s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:22:01.186036  649678 type.go:168] "Request Body" body=""
	I1006 14:22:01.186143  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:01.186563  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:01.686476  649678 type.go:168] "Request Body" body=""
	I1006 14:22:01.686584  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:01.686981  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:22:01.687058  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:22:02.186608  649678 type.go:168] "Request Body" body=""
	I1006 14:22:02.186705  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:02.187061  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:02.316279  649678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 14:22:02.370200  649678 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:22:02.373358  649678 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:22:02.373390  649678 retry.go:31] will retry after 5.569612861s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:22:02.686858  649678 type.go:168] "Request Body" body=""
	I1006 14:22:02.686947  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:02.687350  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:03.185954  649678 type.go:168] "Request Body" body=""
	I1006 14:22:03.186035  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:03.186451  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:03.686069  649678 type.go:168] "Request Body" body=""
	I1006 14:22:03.686185  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:03.686679  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:04.186146  649678 type.go:168] "Request Body" body=""
	I1006 14:22:04.186265  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:04.186682  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:22:04.186759  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:22:04.686312  649678 type.go:168] "Request Body" body=""
	I1006 14:22:04.686448  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:04.686778  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:05.186355  649678 type.go:168] "Request Body" body=""
	I1006 14:22:05.186442  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:05.186804  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:05.686470  649678 type.go:168] "Request Body" body=""
	I1006 14:22:05.686548  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:05.686892  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:06.186409  649678 type.go:168] "Request Body" body=""
	I1006 14:22:06.186493  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:06.186841  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:22:06.186906  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:22:06.686653  649678 type.go:168] "Request Body" body=""
	I1006 14:22:06.686731  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:06.687077  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:07.186430  649678 type.go:168] "Request Body" body=""
	I1006 14:22:07.186515  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:07.186850  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:07.293062  649678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1006 14:22:07.347879  649678 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:22:07.347938  649678 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:22:07.347958  649678 retry.go:31] will retry after 11.599769479s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
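Every apply above fails the same way: kubectl's client-side validation first downloads the OpenAPI schema from the apiserver, and with nothing listening on port 8441 that download dies with "connection refused" before the manifest is ever submitted (the error text itself notes that --validate=false would skip the step). A minimal reproduction of just that validation fetch; the URL and 32s timeout are copied from the logged error, and skipping TLS verification is a demo-only shortcut:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 32 * time.Second, // the ?timeout=32s in the logged URL
		Transport: &http.Transport{
			// Skip cert verification for the demo; kubectl trusts the cluster CA.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://localhost:8441/openapi/v2?timeout=32s")
	if err != nil {
		// With the apiserver down this prints the same "connection refused"
		// that aborts every kubectl apply in the log.
		fmt.Println("failed to download openapi:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("openapi status:", resp.Status)
}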
	I1006 14:22:07.686422  649678 type.go:168] "Request Body" body=""
	I1006 14:22:07.686519  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:07.686919  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:07.943325  649678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 14:22:07.994639  649678 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:22:07.997627  649678 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:22:07.997659  649678 retry.go:31] will retry after 6.982471195s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:22:08.186017  649678 type.go:168] "Request Body" body=""
	I1006 14:22:08.186095  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:08.186523  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:08.686113  649678 type.go:168] "Request Body" body=""
	I1006 14:22:08.686234  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:08.686617  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:22:08.686693  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:22:09.186236  649678 type.go:168] "Request Body" body=""
	I1006 14:22:09.186345  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:09.186717  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:09.686283  649678 type.go:168] "Request Body" body=""
	I1006 14:22:09.686365  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:09.686759  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:10.186558  649678 type.go:168] "Request Body" body=""
	I1006 14:22:10.186657  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:10.187046  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:10.686665  649678 type.go:168] "Request Body" body=""
	I1006 14:22:10.686743  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:10.687116  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:22:10.687244  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:22:11.186799  649678 type.go:168] "Request Body" body=""
	I1006 14:22:11.186892  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:11.187296  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:11.686074  649678 type.go:168] "Request Body" body=""
	I1006 14:22:11.686224  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:11.686586  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:12.186151  649678 type.go:168] "Request Body" body=""
	I1006 14:22:12.186305  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:12.186696  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:12.686260  649678 type.go:168] "Request Body" body=""
	I1006 14:22:12.686345  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:12.686706  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:13.186307  649678 type.go:168] "Request Body" body=""
	I1006 14:22:13.186418  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:13.186788  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:22:13.186857  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:22:13.686381  649678 type.go:168] "Request Body" body=""
	I1006 14:22:13.686488  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:13.686854  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:14.186497  649678 type.go:168] "Request Body" body=""
	I1006 14:22:14.186592  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:14.186941  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:14.686598  649678 type.go:168] "Request Body" body=""
	I1006 14:22:14.686682  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:14.687029  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:14.980397  649678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 14:22:15.034191  649678 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:22:15.034263  649678 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:22:15.034288  649678 retry.go:31] will retry after 12.004605903s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:22:15.186550  649678 type.go:168] "Request Body" body=""
	I1006 14:22:15.186633  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:15.187020  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:22:15.187102  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:22:15.686717  649678 type.go:168] "Request Body" body=""
	I1006 14:22:15.686812  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:15.687196  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:16.186809  649678 type.go:168] "Request Body" body=""
	I1006 14:22:16.186884  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:16.187256  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:16.686013  649678 type.go:168] "Request Body" body=""
	I1006 14:22:16.686098  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:16.686488  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:17.186068  649678 type.go:168] "Request Body" body=""
	I1006 14:22:17.186146  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:17.186573  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:17.686133  649678 type.go:168] "Request Body" body=""
	I1006 14:22:17.686253  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:17.686622  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:22:17.686699  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
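Every failure in this stretch is the same "dial tcp ... connect: connection refused", meaning nothing is listening on the target socket yet: a transient, retry-safe condition rather than a request-level error. A small Go sketch of telling that case apart (Unix-only, standard library; the helper name isConnRefused is illustrative):

package main

import (
	"errors"
	"fmt"
	"net/http"
	"syscall"
)

// isConnRefused reports whether err ultimately wraps ECONNREFUSED, i.e.
// nothing is listening on the target socket yet. On Unix the chain is
// *url.Error -> *net.OpError -> *os.SyscallError -> syscall.Errno.
func isConnRefused(err error) bool {
	return errors.Is(err, syscall.ECONNREFUSED)
}

func main() {
	resp, err := http.Get("https://192.168.49.2:8441/healthz")
	switch {
	case err == nil:
		resp.Body.Close()
		fmt.Println("apiserver answered:", resp.Status)
	case isConnRefused(err):
		fmt.Println("apiserver not up yet; keep polling") // the transient case in this log
	default:
		fmt.Println("different failure, worth surfacing:", err)
	}
}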
	I1006 14:22:18.186192  649678 type.go:168] "Request Body" body=""
	I1006 14:22:18.186295  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:18.186693  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:18.686281  649678 type.go:168] "Request Body" body=""
	I1006 14:22:18.686358  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:18.686685  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:18.948057  649678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1006 14:22:19.002723  649678 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:22:19.002770  649678 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:22:19.002791  649678 retry.go:31] will retry after 9.663618433s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:22:19.186105  649678 type.go:168] "Request Body" body=""
	I1006 14:22:19.186250  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:19.186659  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:19.686518  649678 type.go:168] "Request Body" body=""
	I1006 14:22:19.686605  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:19.686939  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:22:19.687009  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:22:20.186860  649678 type.go:168] "Request Body" body=""
	I1006 14:22:20.186965  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:20.187367  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:20.686167  649678 type.go:168] "Request Body" body=""
	I1006 14:22:20.686275  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:20.686635  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:21.186460  649678 type.go:168] "Request Body" body=""
	I1006 14:22:21.186548  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:21.186942  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:21.686821  649678 type.go:168] "Request Body" body=""
	I1006 14:22:21.686902  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:21.687332  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:22:21.687397  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:22:22.186083  649678 type.go:168] "Request Body" body=""
	I1006 14:22:22.186166  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:22.186569  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:22.686397  649678 type.go:168] "Request Body" body=""
	I1006 14:22:22.686491  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:22.686903  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:23.186781  649678 type.go:168] "Request Body" body=""
	I1006 14:22:23.186870  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:23.187268  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:23.686042  649678 type.go:168] "Request Body" body=""
	I1006 14:22:23.686129  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:23.686575  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:24.186356  649678 type.go:168] "Request Body" body=""
	I1006 14:22:24.186489  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:24.186921  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:22:24.187013  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:22:24.686802  649678 type.go:168] "Request Body" body=""
	I1006 14:22:24.686904  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:24.687313  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:25.186100  649678 type.go:168] "Request Body" body=""
	I1006 14:22:25.186254  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:25.186644  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:25.686394  649678 type.go:168] "Request Body" body=""
	I1006 14:22:25.686478  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:25.686854  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:26.186709  649678 type.go:168] "Request Body" body=""
	I1006 14:22:26.186843  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:26.187291  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:22:26.187357  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:22:26.686108  649678 type.go:168] "Request Body" body=""
	I1006 14:22:26.686232  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:26.686608  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:27.039059  649678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 14:22:27.094007  649678 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:22:27.097496  649678 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:22:27.097534  649678 retry.go:31] will retry after 22.614868096s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:22:27.186847  649678 type.go:168] "Request Body" body=""
	I1006 14:22:27.186925  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:27.187319  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:27.686152  649678 type.go:168] "Request Body" body=""
	I1006 14:22:27.686302  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:27.686651  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:28.186562  649678 type.go:168] "Request Body" body=""
	I1006 14:22:28.186655  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:28.187109  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:28.666677  649678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1006 14:22:28.686315  649678 type.go:168] "Request Body" body=""
	I1006 14:22:28.686424  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:28.686765  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:22:28.686846  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:22:28.722750  649678 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:22:28.722794  649678 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:22:28.722814  649678 retry.go:31] will retry after 11.553901016s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:22:29.186360  649678 type.go:168] "Request Body" body=""
	I1006 14:22:29.186463  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:29.186854  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:29.686594  649678 type.go:168] "Request Body" body=""
	I1006 14:22:29.686674  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:29.687059  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:30.186847  649678 type.go:168] "Request Body" body=""
	I1006 14:22:30.186978  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:30.187394  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:30.685980  649678 type.go:168] "Request Body" body=""
	I1006 14:22:30.686063  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:30.686514  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:31.186103  649678 type.go:168] "Request Body" body=""
	I1006 14:22:31.186273  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:31.186671  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:22:31.186735  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:22:31.686585  649678 type.go:168] "Request Body" body=""
	I1006 14:22:31.686699  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:31.687091  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:32.186757  649678 type.go:168] "Request Body" body=""
	I1006 14:22:32.186864  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:32.187311  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:32.685887  649678 type.go:168] "Request Body" body=""
	I1006 14:22:32.685973  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:32.686388  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:33.186057  649678 type.go:168] "Request Body" body=""
	I1006 14:22:33.186156  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:33.186557  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:33.686144  649678 type.go:168] "Request Body" body=""
	I1006 14:22:33.686262  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:33.686648  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:22:33.686721  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:22:34.186259  649678 type.go:168] "Request Body" body=""
	I1006 14:22:34.186354  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:34.186737  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:34.686419  649678 type.go:168] "Request Body" body=""
	I1006 14:22:34.686498  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:34.686871  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:35.186497  649678 type.go:168] "Request Body" body=""
	I1006 14:22:35.186603  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:35.186980  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:35.686662  649678 type.go:168] "Request Body" body=""
	I1006 14:22:35.686763  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:35.687122  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:22:35.687197  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:22:36.186754  649678 type.go:168] "Request Body" body=""
	I1006 14:22:36.186848  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:36.187316  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:36.686164  649678 type.go:168] "Request Body" body=""
	I1006 14:22:36.686314  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:36.686722  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:37.186321  649678 type.go:168] "Request Body" body=""
	I1006 14:22:37.186420  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:37.186775  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:37.686633  649678 type.go:168] "Request Body" body=""
	I1006 14:22:37.686715  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:37.687101  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:38.185900  649678 type.go:168] "Request Body" body=""
	I1006 14:22:38.185994  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:38.186391  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:22:38.186465  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:22:38.686198  649678 type.go:168] "Request Body" body=""
	I1006 14:22:38.686309  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:38.686708  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:39.186526  649678 type.go:168] "Request Body" body=""
	I1006 14:22:39.186655  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:39.187049  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:39.685917  649678 type.go:168] "Request Body" body=""
	I1006 14:22:39.686005  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:39.686446  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:40.186230  649678 type.go:168] "Request Body" body=""
	I1006 14:22:40.186337  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:40.186733  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:22:40.186801  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:22:40.276916  649678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1006 14:22:40.331801  649678 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:22:40.335179  649678 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:22:40.335232  649678 retry.go:31] will retry after 39.41387573s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:22:40.686763  649678 type.go:168] "Request Body" body=""
	I1006 14:22:40.686899  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:40.687303  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:41.186091  649678 type.go:168] "Request Body" body=""
	I1006 14:22:41.186200  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:41.186603  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:41.686526  649678 type.go:168] "Request Body" body=""
	I1006 14:22:41.686626  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:41.687010  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:42.186887  649678 type.go:168] "Request Body" body=""
	I1006 14:22:42.186964  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:42.187345  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:22:42.187421  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:22:42.686150  649678 type.go:168] "Request Body" body=""
	I1006 14:22:42.686267  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:42.686658  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:43.186527  649678 type.go:168] "Request Body" body=""
	I1006 14:22:43.186614  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:43.186999  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:43.686820  649678 type.go:168] "Request Body" body=""
	I1006 14:22:43.686909  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:43.687318  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:44.186096  649678 type.go:168] "Request Body" body=""
	I1006 14:22:44.186247  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:44.186640  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:44.686530  649678 type.go:168] "Request Body" body=""
	I1006 14:22:44.686615  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:44.687010  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:22:44.687087  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:22:45.186889  649678 type.go:168] "Request Body" body=""
	I1006 14:22:45.186975  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:45.187340  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:45.686094  649678 type.go:168] "Request Body" body=""
	I1006 14:22:45.686177  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:45.686579  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:46.186357  649678 type.go:168] "Request Body" body=""
	I1006 14:22:46.186468  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:46.186826  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:46.686734  649678 type.go:168] "Request Body" body=""
	I1006 14:22:46.686824  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:46.687252  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:22:46.687331  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:22:47.186069  649678 type.go:168] "Request Body" body=""
	I1006 14:22:47.186155  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:47.186586  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:47.686023  649678 type.go:168] "Request Body" body=""
	I1006 14:22:47.686126  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:47.686582  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:48.186406  649678 type.go:168] "Request Body" body=""
	I1006 14:22:48.186501  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:48.186908  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:48.686766  649678 type.go:168] "Request Body" body=""
	I1006 14:22:48.686850  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:48.687229  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:49.186033  649678 type.go:168] "Request Body" body=""
	I1006 14:22:49.186123  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:49.186550  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:22:49.186623  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
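The paired "Request"/"Response" lines throughout this log come from a logging wrapper around the Kubernetes client's HTTP transport (the round_trippers.go references). Assuming nothing about the actual implementation, a minimal stand-in is an http.RoundTripper that logs the verb, URL, and latency and delegates to the wrapped transport:

package main

import (
	"log"
	"net/http"
	"time"
)

// loggingTransport wraps another RoundTripper and logs each request and
// its outcome, in the spirit of the round_trippers.go lines above.
type loggingTransport struct{ next http.RoundTripper }

func (t loggingTransport) RoundTrip(req *http.Request) (*http.Response, error) {
	log.Printf("Request verb=%q url=%q", req.Method, req.URL.String())
	start := time.Now()
	resp, err := t.next.RoundTrip(req)
	ms := time.Since(start).Milliseconds()
	if err != nil {
		// The real logger prints status="" when the dial fails; this
		// sketch logs the transport error instead.
		log.Printf("Response error=%v milliseconds=%d", err, ms)
		return nil, err
	}
	log.Printf("Response status=%q milliseconds=%d", resp.Status, ms)
	return resp, nil
}

func main() {
	client := &http.Client{Transport: loggingTransport{next: http.DefaultTransport}}
	resp, err := client.Get("https://192.168.49.2:8441/api/v1/nodes/functional-135520")
	if err == nil {
		resp.Body.Close()
	}
}

With the server down, every round trip errors before any status arrives, which matches the empty status="" and milliseconds=0 fields repeated above.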
	I1006 14:22:49.686385  649678 type.go:168] "Request Body" body=""
	I1006 14:22:49.686504  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:22:49.686900  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:22:49.713160  649678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 14:22:49.766183  649678 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:22:49.769572  649678 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:22:49.769611  649678 retry.go:31] will retry after 48.442133458s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[log condensed: the GET https://192.168.49.2:8441/api/v1/nodes/functional-135520 request/response pair above repeated on a ~500ms cadence from 14:22:50.186 through 14:23:19.686, every attempt returning an empty response, and node_ready.go:55 re-logged the same "connection refused (will retry)" warning roughly every 2 to 2.5 seconds.]
	I1006 14:23:19.749802  649678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1006 14:23:19.804037  649678 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:23:19.807440  649678 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:23:19.807591  649678 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
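The validation error is a symptom, not the cause: kubectl apply tries to download the OpenAPI schema from the apiserver, and the apiserver is down, so even --validate=false would only get as far as the next connection refusal. A sketch, under the assumption that gating on the apiserver's /readyz endpoint is acceptable, of checking readiness before attempting the apply; the endpoint and port are taken from the log, the rest is illustrative:

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// apiserverReady reports whether the apiserver's health endpoint
	// answers with 200 OK. A connection-refused error simply means "not
	// yet", matching the failures in the log above.
	func apiserverReady(base string) bool {
		client := &http.Client{
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   2 * time.Second,
		}
		resp, err := client.Get(base + "/readyz")
		if err != nil {
			return false
		}
		defer resp.Body.Close()
		return resp.StatusCode == http.StatusOK
	}

	func main() {
		for !apiserverReady("https://localhost:8441") {
			fmt.Println("apiserver not ready; delaying addon apply")
			time.Sleep(time.Second)
		}
		fmt.Println("apiserver ready; safe to run kubectl apply")
	}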
	[log condensed: the same ~500ms polling of the node endpoint continued from 14:23:20.186 through 14:23:38.186, with the recurring node_ready.go:55 connection-refused warning every 2 to 2.5 seconds.]
	I1006 14:23:38.212898  649678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 14:23:38.268129  649678 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:23:38.271217  649678 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:23:38.271448  649678 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1006 14:23:38.274179  649678 out.go:179] * Enabled addons: 
	I1006 14:23:38.275265  649678 addons.go:514] duration metric: took 1m48.200610857s for enable addons: enabled=[]
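The summary line records an elapsed-time metric with an empty enabled list, meaning no addon apply succeeded anywhere in the 1m48s window. The pattern behind that line, sketched with time.Since; the variable names are illustrative:

	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		start := time.Now()
		enabled := []string{}            // stays empty because every apply failed
		time.Sleep(10 * time.Millisecond) // stand-in for the enable-addons loop
		fmt.Printf("duration metric: took %s for enable addons: enabled=%v\n",
			time.Since(start), enabled)
	}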
	I1006 14:23:38.686820  649678 type.go:168] "Request Body" body=""
	I1006 14:23:38.686904  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:23:38.687336  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:23:39.186242  649678 type.go:168] "Request Body" body=""
	I1006 14:23:39.186340  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:23:39.186728  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:23:39.686616  649678 type.go:168] "Request Body" body=""
	I1006 14:23:39.686713  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:23:39.687110  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:23:40.185923  649678 type.go:168] "Request Body" body=""
	I1006 14:23:40.186012  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:23:40.186440  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:23:40.686260  649678 type.go:168] "Request Body" body=""
	I1006 14:23:40.686360  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:23:40.686781  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:23:40.686870  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:23:41.186716  649678 type.go:168] "Request Body" body=""
	I1006 14:23:41.186846  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:23:41.187307  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:23:41.686117  649678 type.go:168] "Request Body" body=""
	I1006 14:23:41.686264  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:23:41.686651  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:23:42.186500  649678 type.go:168] "Request Body" body=""
	I1006 14:23:42.186601  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:23:42.187000  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:23:42.686853  649678 type.go:168] "Request Body" body=""
	I1006 14:23:42.686932  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:23:42.687293  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:23:42.687369  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:23:43.186081  649678 type.go:168] "Request Body" body=""
	I1006 14:23:43.186176  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:23:43.186615  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:23:43.686377  649678 type.go:168] "Request Body" body=""
	I1006 14:23:43.686461  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:23:43.686807  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:23:44.186682  649678 type.go:168] "Request Body" body=""
	I1006 14:23:44.186789  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:23:44.187155  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:23:44.685945  649678 type.go:168] "Request Body" body=""
	I1006 14:23:44.686029  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:23:44.686444  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:23:45.186221  649678 type.go:168] "Request Body" body=""
	I1006 14:23:45.186326  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:23:45.186717  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:23:45.186786  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
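What the loop is ultimately waiting for is the node's Ready condition. A hedged sketch of that check against client-go follows: the CoreV1().Nodes().Get call is the real client-go API, but the kubeconfig wiring and the helper name are assumptions for illustration, not lifted from minikube's node_ready.go. While the API server stays down, the Get returns a transport error ("connection refused"), which is exactly what the caller keeps retrying above.

// node_ready_sketch.go: what the "Ready" check amounts to, written
// against client-go. Kubeconfig handling and the helper name are
// illustrative assumptions.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeIsReady fetches one node and inspects its NodeReady condition.
// While the apiserver is down this returns a transport error, which the
// caller treats as retryable, matching the warnings in the log.
func nodeIsReady(cs kubernetes.Interface, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, fmt.Errorf("node %q has no Ready condition", name)
}

func main() {
	// Kubeconfig path is an assumption; minikube writes its own profile config.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ready, err := nodeIsReady(cs, "functional-135520")
	fmt.Println(ready, err)
}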
	[the identical request/refused cycle repeats at ~500 ms intervals from 14:23:45.686681 through the warning at 14:24:39.686903: every GET to https://192.168.49.2:8441/api/v1/nodes/functional-135520 carries the same Accept and User-Agent headers, every response line reads status="" headers="" milliseconds=0, and node_ready.go repeats the same "(will retry): ... dial tcp 192.168.49.2:8441: connect: connection refused" warning roughly every two seconds]
	I1006 14:24:40.186723  649678 type.go:168] "Request Body" body=""
	I1006 14:24:40.186824  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:40.187257  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:40.686897  649678 type.go:168] "Request Body" body=""
	I1006 14:24:40.686983  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:40.687415  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:41.186000  649678 type.go:168] "Request Body" body=""
	I1006 14:24:41.186080  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:41.186497  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:41.686311  649678 type.go:168] "Request Body" body=""
	I1006 14:24:41.686398  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:41.686747  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:42.186394  649678 type.go:168] "Request Body" body=""
	I1006 14:24:42.186477  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:42.186829  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:24:42.186909  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:24:42.686365  649678 type.go:168] "Request Body" body=""
	I1006 14:24:42.686458  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:42.686828  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:43.186364  649678 type.go:168] "Request Body" body=""
	I1006 14:24:43.186453  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:43.186835  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:43.686404  649678 type.go:168] "Request Body" body=""
	I1006 14:24:43.686479  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:43.686829  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:44.186419  649678 type.go:168] "Request Body" body=""
	I1006 14:24:44.186497  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:44.186840  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:44.686503  649678 type.go:168] "Request Body" body=""
	I1006 14:24:44.686579  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:44.686908  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:24:44.686976  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:24:45.186546  649678 type.go:168] "Request Body" body=""
	I1006 14:24:45.186633  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:45.186973  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:45.686633  649678 type.go:168] "Request Body" body=""
	I1006 14:24:45.686722  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:45.687066  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:46.186715  649678 type.go:168] "Request Body" body=""
	I1006 14:24:46.186798  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:46.187164  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:46.686921  649678 type.go:168] "Request Body" body=""
	I1006 14:24:46.687008  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:46.687441  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:24:46.687511  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:24:47.186093  649678 type.go:168] "Request Body" body=""
	I1006 14:24:47.186175  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:47.186548  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:47.686128  649678 type.go:168] "Request Body" body=""
	I1006 14:24:47.686233  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:47.686613  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:48.186260  649678 type.go:168] "Request Body" body=""
	I1006 14:24:48.186345  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:48.186715  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:48.686317  649678 type.go:168] "Request Body" body=""
	I1006 14:24:48.686416  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:48.686787  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:49.186383  649678 type.go:168] "Request Body" body=""
	I1006 14:24:49.186483  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:49.186862  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:24:49.186934  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:24:49.686547  649678 type.go:168] "Request Body" body=""
	I1006 14:24:49.686630  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:49.687018  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:50.186932  649678 type.go:168] "Request Body" body=""
	I1006 14:24:50.187020  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:50.187392  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:50.685995  649678 type.go:168] "Request Body" body=""
	I1006 14:24:50.686087  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:50.686639  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:51.186241  649678 type.go:168] "Request Body" body=""
	I1006 14:24:51.186321  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:51.186677  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:51.686524  649678 type.go:168] "Request Body" body=""
	I1006 14:24:51.686604  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:51.686971  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:24:51.687045  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:24:52.186636  649678 type.go:168] "Request Body" body=""
	I1006 14:24:52.186724  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:52.187108  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:52.686753  649678 type.go:168] "Request Body" body=""
	I1006 14:24:52.686831  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:52.687267  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:53.185896  649678 type.go:168] "Request Body" body=""
	I1006 14:24:53.185979  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:53.186366  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:53.685914  649678 type.go:168] "Request Body" body=""
	I1006 14:24:53.685990  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:53.686334  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:54.185922  649678 type.go:168] "Request Body" body=""
	I1006 14:24:54.186002  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:54.186408  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:24:54.186489  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:24:54.685967  649678 type.go:168] "Request Body" body=""
	I1006 14:24:54.686051  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:54.686451  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:55.186040  649678 type.go:168] "Request Body" body=""
	I1006 14:24:55.186122  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:55.186477  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:55.686036  649678 type.go:168] "Request Body" body=""
	I1006 14:24:55.686113  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:55.686480  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:56.186026  649678 type.go:168] "Request Body" body=""
	I1006 14:24:56.186104  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:56.186478  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:24:56.186550  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:24:56.686248  649678 type.go:168] "Request Body" body=""
	I1006 14:24:56.686329  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:56.686693  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:57.186234  649678 type.go:168] "Request Body" body=""
	I1006 14:24:57.186315  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:57.186630  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:57.686283  649678 type.go:168] "Request Body" body=""
	I1006 14:24:57.686402  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:57.686814  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:58.186365  649678 type.go:168] "Request Body" body=""
	I1006 14:24:58.186450  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:58.186794  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:24:58.186858  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:24:58.686485  649678 type.go:168] "Request Body" body=""
	I1006 14:24:58.686625  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:58.687000  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:59.186645  649678 type.go:168] "Request Body" body=""
	I1006 14:24:59.186728  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:59.187067  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:24:59.686701  649678 type.go:168] "Request Body" body=""
	I1006 14:24:59.686778  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:24:59.687158  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:00.185971  649678 type.go:168] "Request Body" body=""
	I1006 14:25:00.186051  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:00.186405  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:00.686037  649678 type.go:168] "Request Body" body=""
	I1006 14:25:00.686117  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:00.686528  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:25:00.686606  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:25:01.186098  649678 type.go:168] "Request Body" body=""
	I1006 14:25:01.186186  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:01.186639  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:01.686574  649678 type.go:168] "Request Body" body=""
	I1006 14:25:01.686664  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:01.687059  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:02.186731  649678 type.go:168] "Request Body" body=""
	I1006 14:25:02.186819  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:02.187259  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:02.685880  649678 type.go:168] "Request Body" body=""
	I1006 14:25:02.685972  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:02.686460  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:03.186037  649678 type.go:168] "Request Body" body=""
	I1006 14:25:03.186117  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:03.186526  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:25:03.186595  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:25:03.686186  649678 type.go:168] "Request Body" body=""
	I1006 14:25:03.686282  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:03.686638  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:04.186251  649678 type.go:168] "Request Body" body=""
	I1006 14:25:04.186325  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:04.186672  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:04.686261  649678 type.go:168] "Request Body" body=""
	I1006 14:25:04.686346  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:04.686697  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:05.186293  649678 type.go:168] "Request Body" body=""
	I1006 14:25:05.186374  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:05.186780  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:25:05.186857  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:25:05.686332  649678 type.go:168] "Request Body" body=""
	I1006 14:25:05.686416  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:05.686772  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:06.186370  649678 type.go:168] "Request Body" body=""
	I1006 14:25:06.186449  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:06.186819  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:06.686670  649678 type.go:168] "Request Body" body=""
	I1006 14:25:06.686749  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:06.687114  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:07.186765  649678 type.go:168] "Request Body" body=""
	I1006 14:25:07.186854  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:07.187255  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:25:07.187328  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:25:07.686866  649678 type.go:168] "Request Body" body=""
	I1006 14:25:07.686945  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:07.687337  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:08.185991  649678 type.go:168] "Request Body" body=""
	I1006 14:25:08.186073  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:08.186473  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:08.686026  649678 type.go:168] "Request Body" body=""
	I1006 14:25:08.686101  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:08.686467  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:09.186027  649678 type.go:168] "Request Body" body=""
	I1006 14:25:09.186117  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:09.186491  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:09.686131  649678 type.go:168] "Request Body" body=""
	I1006 14:25:09.686218  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:09.686554  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:25:09.686624  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:25:10.186421  649678 type.go:168] "Request Body" body=""
	I1006 14:25:10.186509  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:10.186885  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:10.686589  649678 type.go:168] "Request Body" body=""
	I1006 14:25:10.686673  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:10.687059  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:11.186451  649678 type.go:168] "Request Body" body=""
	I1006 14:25:11.186534  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:11.186908  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:11.686874  649678 type.go:168] "Request Body" body=""
	I1006 14:25:11.686958  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:11.687404  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:25:11.687478  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:25:12.186004  649678 type.go:168] "Request Body" body=""
	I1006 14:25:12.186089  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:12.186488  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:12.686071  649678 type.go:168] "Request Body" body=""
	I1006 14:25:12.686175  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:12.686583  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:13.186311  649678 type.go:168] "Request Body" body=""
	I1006 14:25:13.186394  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:13.186794  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:13.686469  649678 type.go:168] "Request Body" body=""
	I1006 14:25:13.686560  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:13.686955  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:14.186674  649678 type.go:168] "Request Body" body=""
	I1006 14:25:14.186764  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:14.187198  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:25:14.187305  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:25:14.686830  649678 type.go:168] "Request Body" body=""
	I1006 14:25:14.686915  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:14.687318  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:15.185883  649678 type.go:168] "Request Body" body=""
	I1006 14:25:15.185963  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:15.186381  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:15.685988  649678 type.go:168] "Request Body" body=""
	I1006 14:25:15.686075  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:15.686471  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:16.186057  649678 type.go:168] "Request Body" body=""
	I1006 14:25:16.186159  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:16.186628  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:16.686506  649678 type.go:168] "Request Body" body=""
	I1006 14:25:16.686586  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:16.686922  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:25:16.686991  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:25:17.186686  649678 type.go:168] "Request Body" body=""
	I1006 14:25:17.186779  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:17.187190  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:17.686871  649678 type.go:168] "Request Body" body=""
	I1006 14:25:17.686958  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:17.687378  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:18.185930  649678 type.go:168] "Request Body" body=""
	I1006 14:25:18.186011  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:18.186362  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:18.686006  649678 type.go:168] "Request Body" body=""
	I1006 14:25:18.686091  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:18.686522  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:19.186154  649678 type.go:168] "Request Body" body=""
	I1006 14:25:19.186270  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:19.186661  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:25:19.186738  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:25:19.686272  649678 type.go:168] "Request Body" body=""
	I1006 14:25:19.686357  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:19.686722  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:20.186620  649678 type.go:168] "Request Body" body=""
	I1006 14:25:20.186712  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:20.187085  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:20.686732  649678 type.go:168] "Request Body" body=""
	I1006 14:25:20.686813  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:20.687200  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:21.186886  649678 type.go:168] "Request Body" body=""
	I1006 14:25:21.186971  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:21.187421  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:25:21.187498  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:25:21.686192  649678 type.go:168] "Request Body" body=""
	I1006 14:25:21.686313  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:21.686703  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:22.186337  649678 type.go:168] "Request Body" body=""
	I1006 14:25:22.186443  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:22.186816  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:22.686392  649678 type.go:168] "Request Body" body=""
	I1006 14:25:22.686470  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:22.686872  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:23.186538  649678 type.go:168] "Request Body" body=""
	I1006 14:25:23.186623  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:23.186990  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:23.686645  649678 type.go:168] "Request Body" body=""
	I1006 14:25:23.686745  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:23.687147  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:25:23.687255  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:25:24.186838  649678 type.go:168] "Request Body" body=""
	I1006 14:25:24.186917  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:24.187309  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:24.685862  649678 type.go:168] "Request Body" body=""
	I1006 14:25:24.685944  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:24.686370  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:25.185903  649678 type.go:168] "Request Body" body=""
	I1006 14:25:25.185979  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:25.186373  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:25.685951  649678 type.go:168] "Request Body" body=""
	I1006 14:25:25.686032  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:25.686450  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:26.186018  649678 type.go:168] "Request Body" body=""
	I1006 14:25:26.186098  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:26.186497  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:25:26.186566  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:25:26.686293  649678 type.go:168] "Request Body" body=""
	I1006 14:25:26.686378  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:26.686746  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:27.186364  649678 type.go:168] "Request Body" body=""
	I1006 14:25:27.186454  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:27.186827  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:27.686418  649678 type.go:168] "Request Body" body=""
	I1006 14:25:27.686503  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:27.686844  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:28.186581  649678 type.go:168] "Request Body" body=""
	I1006 14:25:28.186676  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:28.187085  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:25:28.187196  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:25:28.686665  649678 type.go:168] "Request Body" body=""
	I1006 14:25:28.686737  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:28.687051  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:29.186712  649678 type.go:168] "Request Body" body=""
	I1006 14:25:29.186801  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:29.187161  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:29.685861  649678 type.go:168] "Request Body" body=""
	I1006 14:25:29.685951  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:29.686323  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:30.186241  649678 type.go:168] "Request Body" body=""
	I1006 14:25:30.186336  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:30.186725  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:25:30.686347  649678 type.go:168] "Request Body" body=""
	I1006 14:25:30.686438  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:30.686799  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:25:30.686867  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:25:31.186356  649678 type.go:168] "Request Body" body=""
	I1006 14:25:31.186436  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:25:31.186790  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... the identical GET poll of https://192.168.49.2:8441/api/v1/nodes/functional-135520 repeats every ~500 ms from 14:25:31 through 14:26:33; each attempt gets an empty response (status="" milliseconds=0), and roughly every 2.5 s node_ready.go:55 logs the same warning: error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused ...]
	I1006 14:26:33.685998  649678 type.go:168] "Request Body" body=""
	I1006 14:26:33.686076  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:33.686491  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:34.186036  649678 type.go:168] "Request Body" body=""
	I1006 14:26:34.186137  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:34.186537  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:34.686069  649678 type.go:168] "Request Body" body=""
	I1006 14:26:34.686144  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:34.686500  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:26:34.686564  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:26:35.186170  649678 type.go:168] "Request Body" body=""
	I1006 14:26:35.186296  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:35.186675  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:35.686291  649678 type.go:168] "Request Body" body=""
	I1006 14:26:35.686375  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:35.686758  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:36.186396  649678 type.go:168] "Request Body" body=""
	I1006 14:26:36.186499  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:36.186883  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:36.686651  649678 type.go:168] "Request Body" body=""
	I1006 14:26:36.686732  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:36.687079  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:26:36.687145  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:26:37.186756  649678 type.go:168] "Request Body" body=""
	I1006 14:26:37.186868  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:37.187300  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:37.685900  649678 type.go:168] "Request Body" body=""
	I1006 14:26:37.686015  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:37.686475  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:38.186110  649678 type.go:168] "Request Body" body=""
	I1006 14:26:38.186226  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:38.186598  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:38.686176  649678 type.go:168] "Request Body" body=""
	I1006 14:26:38.686303  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:38.686658  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:39.186240  649678 type.go:168] "Request Body" body=""
	I1006 14:26:39.186320  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:39.186682  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:26:39.186749  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:26:39.686298  649678 type.go:168] "Request Body" body=""
	I1006 14:26:39.686387  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:39.686746  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:40.186587  649678 type.go:168] "Request Body" body=""
	I1006 14:26:40.186667  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:40.187038  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:40.686696  649678 type.go:168] "Request Body" body=""
	I1006 14:26:40.686801  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:40.687169  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:41.186829  649678 type.go:168] "Request Body" body=""
	I1006 14:26:41.186908  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:41.187312  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:26:41.187383  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:26:41.686029  649678 type.go:168] "Request Body" body=""
	I1006 14:26:41.686108  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:41.686522  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:42.186071  649678 type.go:168] "Request Body" body=""
	I1006 14:26:42.186168  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:42.186549  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:42.686104  649678 type.go:168] "Request Body" body=""
	I1006 14:26:42.686190  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:42.686575  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:43.186140  649678 type.go:168] "Request Body" body=""
	I1006 14:26:43.186255  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:43.186605  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:43.686244  649678 type.go:168] "Request Body" body=""
	I1006 14:26:43.686321  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:43.686657  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:26:43.686731  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:26:44.186303  649678 type.go:168] "Request Body" body=""
	I1006 14:26:44.186390  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:44.186758  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:44.686323  649678 type.go:168] "Request Body" body=""
	I1006 14:26:44.686402  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:44.686737  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:45.186332  649678 type.go:168] "Request Body" body=""
	I1006 14:26:45.186410  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:45.186776  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:45.686331  649678 type.go:168] "Request Body" body=""
	I1006 14:26:45.686415  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:45.686779  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:26:45.686856  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:26:46.186339  649678 type.go:168] "Request Body" body=""
	I1006 14:26:46.186430  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:46.186785  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:46.686621  649678 type.go:168] "Request Body" body=""
	I1006 14:26:46.686715  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:46.687061  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:47.186713  649678 type.go:168] "Request Body" body=""
	I1006 14:26:47.186815  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:47.187185  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:47.686868  649678 type.go:168] "Request Body" body=""
	I1006 14:26:47.686957  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:47.687305  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:26:47.687372  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:26:48.185956  649678 type.go:168] "Request Body" body=""
	I1006 14:26:48.186058  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:48.186446  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:48.686113  649678 type.go:168] "Request Body" body=""
	I1006 14:26:48.686236  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:48.686589  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:49.186156  649678 type.go:168] "Request Body" body=""
	I1006 14:26:49.186290  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:49.186679  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:49.686186  649678 type.go:168] "Request Body" body=""
	I1006 14:26:49.686282  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:49.686588  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:50.186404  649678 type.go:168] "Request Body" body=""
	I1006 14:26:50.186506  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:50.186917  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:26:50.186990  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:26:50.686607  649678 type.go:168] "Request Body" body=""
	I1006 14:26:50.686695  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:50.687128  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:51.186788  649678 type.go:168] "Request Body" body=""
	I1006 14:26:51.186968  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:51.187381  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:51.686169  649678 type.go:168] "Request Body" body=""
	I1006 14:26:51.686282  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:51.686666  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:52.186376  649678 type.go:168] "Request Body" body=""
	I1006 14:26:52.186493  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:52.186854  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:52.686550  649678 type.go:168] "Request Body" body=""
	I1006 14:26:52.686631  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:52.686915  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:26:52.686968  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:26:53.186633  649678 type.go:168] "Request Body" body=""
	I1006 14:26:53.186732  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:53.187095  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:53.686774  649678 type.go:168] "Request Body" body=""
	I1006 14:26:53.686871  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:53.687310  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:54.185884  649678 type.go:168] "Request Body" body=""
	I1006 14:26:54.185972  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:54.186391  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:54.685933  649678 type.go:168] "Request Body" body=""
	I1006 14:26:54.686006  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:54.686391  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:55.186064  649678 type.go:168] "Request Body" body=""
	I1006 14:26:55.186180  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:55.186574  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:26:55.186642  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:26:55.686159  649678 type.go:168] "Request Body" body=""
	I1006 14:26:55.686263  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:55.686668  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:56.186304  649678 type.go:168] "Request Body" body=""
	I1006 14:26:56.186418  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:56.186815  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:56.686705  649678 type.go:168] "Request Body" body=""
	I1006 14:26:56.686789  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:56.687169  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:57.186778  649678 type.go:168] "Request Body" body=""
	I1006 14:26:57.186869  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:57.187240  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:26:57.187304  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:26:57.685924  649678 type.go:168] "Request Body" body=""
	I1006 14:26:57.686000  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:57.686362  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:58.185951  649678 type.go:168] "Request Body" body=""
	I1006 14:26:58.186045  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:58.186445  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:58.685995  649678 type.go:168] "Request Body" body=""
	I1006 14:26:58.686071  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:58.686437  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:59.186003  649678 type.go:168] "Request Body" body=""
	I1006 14:26:59.186190  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:59.186571  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:26:59.686153  649678 type.go:168] "Request Body" body=""
	I1006 14:26:59.686257  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:26:59.686662  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:26:59.686725  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:27:00.186605  649678 type.go:168] "Request Body" body=""
	I1006 14:27:00.186714  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:00.187091  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:00.686763  649678 type.go:168] "Request Body" body=""
	I1006 14:27:00.686859  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:00.687243  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:01.186928  649678 type.go:168] "Request Body" body=""
	I1006 14:27:01.187012  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:01.187398  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:01.686308  649678 type.go:168] "Request Body" body=""
	I1006 14:27:01.686391  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:01.686761  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:27:01.686839  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:27:02.186358  649678 type.go:168] "Request Body" body=""
	I1006 14:27:02.186439  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:02.186809  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:02.686423  649678 type.go:168] "Request Body" body=""
	I1006 14:27:02.686509  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:02.686907  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:03.186590  649678 type.go:168] "Request Body" body=""
	I1006 14:27:03.186676  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:03.187035  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:03.686678  649678 type.go:168] "Request Body" body=""
	I1006 14:27:03.686764  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:03.687130  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:27:03.687245  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:27:04.186807  649678 type.go:168] "Request Body" body=""
	I1006 14:27:04.186891  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:04.187266  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:04.686913  649678 type.go:168] "Request Body" body=""
	I1006 14:27:04.686987  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:04.687327  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:05.185951  649678 type.go:168] "Request Body" body=""
	I1006 14:27:05.186036  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:05.186442  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:05.685992  649678 type.go:168] "Request Body" body=""
	I1006 14:27:05.686068  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:05.686436  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:06.186013  649678 type.go:168] "Request Body" body=""
	I1006 14:27:06.186094  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:06.186496  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:27:06.186569  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:27:06.686265  649678 type.go:168] "Request Body" body=""
	I1006 14:27:06.686367  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:06.686740  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:07.186336  649678 type.go:168] "Request Body" body=""
	I1006 14:27:07.186417  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:07.186760  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:07.686331  649678 type.go:168] "Request Body" body=""
	I1006 14:27:07.686437  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:07.686806  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:08.186436  649678 type.go:168] "Request Body" body=""
	I1006 14:27:08.186520  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:08.186903  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:27:08.186969  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:27:08.686610  649678 type.go:168] "Request Body" body=""
	I1006 14:27:08.686699  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:08.687059  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:09.186699  649678 type.go:168] "Request Body" body=""
	I1006 14:27:09.186792  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:09.187140  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:09.686782  649678 type.go:168] "Request Body" body=""
	I1006 14:27:09.686873  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:09.687256  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:10.185990  649678 type.go:168] "Request Body" body=""
	I1006 14:27:10.186073  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:10.186441  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:10.686081  649678 type.go:168] "Request Body" body=""
	I1006 14:27:10.686241  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:10.686611  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:27:10.686681  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:27:11.186246  649678 type.go:168] "Request Body" body=""
	I1006 14:27:11.186326  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:11.186676  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:11.686547  649678 type.go:168] "Request Body" body=""
	I1006 14:27:11.686634  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:11.686982  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:12.186629  649678 type.go:168] "Request Body" body=""
	I1006 14:27:12.186708  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:12.187095  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:12.686714  649678 type.go:168] "Request Body" body=""
	I1006 14:27:12.686808  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:12.687182  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:27:12.687301  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:27:13.186802  649678 type.go:168] "Request Body" body=""
	I1006 14:27:13.186882  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:13.187293  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:13.686883  649678 type.go:168] "Request Body" body=""
	I1006 14:27:13.686963  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:13.687307  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:14.185879  649678 type.go:168] "Request Body" body=""
	I1006 14:27:14.185967  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:14.186371  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:14.685892  649678 type.go:168] "Request Body" body=""
	I1006 14:27:14.685968  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:14.686306  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:15.185837  649678 type.go:168] "Request Body" body=""
	I1006 14:27:15.185912  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:15.186295  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:27:15.186372  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:27:15.685893  649678 type.go:168] "Request Body" body=""
	I1006 14:27:15.685969  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:15.686294  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:16.185990  649678 type.go:168] "Request Body" body=""
	I1006 14:27:16.186081  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:16.186492  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:16.686393  649678 type.go:168] "Request Body" body=""
	I1006 14:27:16.686478  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:16.686834  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:17.186384  649678 type.go:168] "Request Body" body=""
	I1006 14:27:17.186479  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:17.186834  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:27:17.186910  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:27:17.686523  649678 type.go:168] "Request Body" body=""
	I1006 14:27:17.686606  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:17.686989  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:18.186641  649678 type.go:168] "Request Body" body=""
	I1006 14:27:18.186739  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:18.187119  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:18.686755  649678 type.go:168] "Request Body" body=""
	I1006 14:27:18.686840  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:18.687189  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:19.186887  649678 type.go:168] "Request Body" body=""
	I1006 14:27:19.186975  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:19.187444  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:27:19.187516  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:27:19.686032  649678 type.go:168] "Request Body" body=""
	I1006 14:27:19.686111  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:19.686551  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:20.186447  649678 type.go:168] "Request Body" body=""
	I1006 14:27:20.186532  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:20.186905  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:20.686572  649678 type.go:168] "Request Body" body=""
	I1006 14:27:20.686660  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:20.687016  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:21.186692  649678 type.go:168] "Request Body" body=""
	I1006 14:27:21.186778  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:21.187150  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:21.685991  649678 type.go:168] "Request Body" body=""
	I1006 14:27:21.686073  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:21.686471  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:27:21.686536  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	I1006 14:27:22.186060  649678 type.go:168] "Request Body" body=""
	I1006 14:27:22.186159  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:22.186562  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:22.686161  649678 type.go:168] "Request Body" body=""
	I1006 14:27:22.686270  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:22.686631  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:23.186276  649678 type.go:168] "Request Body" body=""
	I1006 14:27:23.186365  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:23.186747  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:23.686349  649678 type.go:168] "Request Body" body=""
	I1006 14:27:23.686435  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:23.686810  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1006 14:27:23.686876  649678 node_ready.go:55] error getting node "functional-135520" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-135520": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the 500 ms GET https://192.168.49.2:8441/api/v1/nodes/functional-135520 poll repeated unchanged from 14:27:24 through 14:27:49.186, every response returning "connection refused", with the same node_ready.go "will retry" warning logged roughly every two seconds ...]
	I1006 14:27:49.686018  649678 type.go:168] "Request Body" body=""
	I1006 14:27:49.686094  649678 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-135520" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1006 14:27:49.686456  649678 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1006 14:27:50.186006  649678 node_ready.go:38] duration metric: took 6m0.000261558s for node "functional-135520" to be "Ready" ...
	I1006 14:27:50.189087  649678 out.go:203] 
	W1006 14:27:50.190513  649678 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1006 14:27:50.190545  649678 out.go:285] * 
	W1006 14:27:50.192353  649678 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1006 14:27:50.193614  649678 out.go:203] 
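
The six-minute stretch of identical GETs above is minikube's node-readiness wait (node_ready.go) timing out against an apiserver that never comes up. Below is a minimal Go sketch of that pattern, assuming a client-go clientset and a caller-supplied deadline; waitNodeReady and the 500 ms interval are illustrative names and values, not minikube's exact code:

	package nodewait

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// waitNodeReady mirrors the retry loop in the log above: fetch the node
	// every 500ms, swallow transient errors such as "connect: connection
	// refused", and return only when the Ready condition is True or the
	// caller's deadline (6m0s in this run) expires.
	func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
		ticker := time.NewTicker(500 * time.Millisecond)
		defer ticker.Stop()
		for {
			select {
			case <-ctx.Done():
				return fmt.Errorf("WaitNodeCondition: %w", ctx.Err())
			case <-ticker.C:
				node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					continue // "will retry", as the warnings above put it
				}
				for _, cond := range node.Status.Conditions {
					if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
						return nil
					}
				}
			}
		}
	}

Note that the loop never inspects why the GET failed; it simply retries until the context deadline, which is why the log is a solid wall of refused connections followed by a single GUEST_START exit.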
	
	
	==> CRI-O <==
	Oct 06 14:28:00 functional-135520 crio[2950]: time="2025-10-06T14:28:00.824420663Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=85218031-7b8c-433e-98e7-94ab0a5cb18e name=/runtime.v1.ImageService/ImageStatus
	Oct 06 14:28:01 functional-135520 crio[2950]: time="2025-10-06T14:28:01.130728219Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=21e95176-c1eb-4eac-a1c5-1b20ba3bb34f name=/runtime.v1.ImageService/ImageStatus
	Oct 06 14:28:01 functional-135520 crio[2950]: time="2025-10-06T14:28:01.130852232Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=21e95176-c1eb-4eac-a1c5-1b20ba3bb34f name=/runtime.v1.ImageService/ImageStatus
	Oct 06 14:28:01 functional-135520 crio[2950]: time="2025-10-06T14:28:01.130883972Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=21e95176-c1eb-4eac-a1c5-1b20ba3bb34f name=/runtime.v1.ImageService/ImageStatus
	Oct 06 14:28:01 functional-135520 crio[2950]: time="2025-10-06T14:28:01.595902226Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=79489142-a558-431d-8fb7-23db9b1565ba name=/runtime.v1.ImageService/ImageStatus
	Oct 06 14:28:01 functional-135520 crio[2950]: time="2025-10-06T14:28:01.596021943Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=79489142-a558-431d-8fb7-23db9b1565ba name=/runtime.v1.ImageService/ImageStatus
	Oct 06 14:28:01 functional-135520 crio[2950]: time="2025-10-06T14:28:01.596050756Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=79489142-a558-431d-8fb7-23db9b1565ba name=/runtime.v1.ImageService/ImageStatus
	Oct 06 14:28:01 functional-135520 crio[2950]: time="2025-10-06T14:28:01.620844267Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=9e2bbeed-d602-4af9-8ab4-f9b8ab20dddb name=/runtime.v1.ImageService/ImageStatus
	Oct 06 14:28:01 functional-135520 crio[2950]: time="2025-10-06T14:28:01.620964771Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=9e2bbeed-d602-4af9-8ab4-f9b8ab20dddb name=/runtime.v1.ImageService/ImageStatus
	Oct 06 14:28:01 functional-135520 crio[2950]: time="2025-10-06T14:28:01.621003821Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=9e2bbeed-d602-4af9-8ab4-f9b8ab20dddb name=/runtime.v1.ImageService/ImageStatus
	Oct 06 14:28:01 functional-135520 crio[2950]: time="2025-10-06T14:28:01.645920535Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=35734698-0027-497c-b541-d0a0441dd042 name=/runtime.v1.ImageService/ImageStatus
	Oct 06 14:28:01 functional-135520 crio[2950]: time="2025-10-06T14:28:01.646041194Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=35734698-0027-497c-b541-d0a0441dd042 name=/runtime.v1.ImageService/ImageStatus
	Oct 06 14:28:01 functional-135520 crio[2950]: time="2025-10-06T14:28:01.646072758Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=35734698-0027-497c-b541-d0a0441dd042 name=/runtime.v1.ImageService/ImageStatus
	Oct 06 14:28:02 functional-135520 crio[2950]: time="2025-10-06T14:28:02.116111529Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=8a30ea05-b08e-46c1-917b-0164344a7cc9 name=/runtime.v1.ImageService/ImageStatus
	Oct 06 14:28:02 functional-135520 crio[2950]: time="2025-10-06T14:28:02.516234093Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=e677ac8f-d76b-4473-833d-002c35d4d82c name=/runtime.v1.ImageService/ImageStatus
	Oct 06 14:28:02 functional-135520 crio[2950]: time="2025-10-06T14:28:02.517126511Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=16949569-4475-4c02-a932-da141b5308d6 name=/runtime.v1.ImageService/ImageStatus
	Oct 06 14:28:02 functional-135520 crio[2950]: time="2025-10-06T14:28:02.518121936Z" level=info msg="Creating container: kube-system/kube-apiserver-functional-135520/kube-apiserver" id=53631056-42d2-4d65-99d6-fd09a0807f2a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:28:02 functional-135520 crio[2950]: time="2025-10-06T14:28:02.518389488Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 14:28:02 functional-135520 crio[2950]: time="2025-10-06T14:28:02.521879193Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 14:28:02 functional-135520 crio[2950]: time="2025-10-06T14:28:02.522492085Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 14:28:02 functional-135520 crio[2950]: time="2025-10-06T14:28:02.540395331Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=53631056-42d2-4d65-99d6-fd09a0807f2a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:28:02 functional-135520 crio[2950]: time="2025-10-06T14:28:02.541809379Z" level=info msg="createCtr: deleting container ID 39a073daffb8d517b9bf89bc91f73d0ad67e3a285107108221dcddfcc68e842b from idIndex" id=53631056-42d2-4d65-99d6-fd09a0807f2a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:28:02 functional-135520 crio[2950]: time="2025-10-06T14:28:02.541848123Z" level=info msg="createCtr: removing container 39a073daffb8d517b9bf89bc91f73d0ad67e3a285107108221dcddfcc68e842b" id=53631056-42d2-4d65-99d6-fd09a0807f2a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:28:02 functional-135520 crio[2950]: time="2025-10-06T14:28:02.541881193Z" level=info msg="createCtr: deleting container 39a073daffb8d517b9bf89bc91f73d0ad67e3a285107108221dcddfcc68e842b from storage" id=53631056-42d2-4d65-99d6-fd09a0807f2a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:28:02 functional-135520 crio[2950]: time="2025-10-06T14:28:02.54389405Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-functional-135520_kube-system_64c921c0d544efd1faaa2d85c050bc13_0" id=53631056-42d2-4d65-99d6-fd09a0807f2a name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:28:05.650453    5471 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:28:05.650970    5471 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:28:05.652543    5471 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:28:05.652985    5471 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:28:05.654516    5471 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
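
kubectl inside the guest fails exactly the way the external client did, which rules out kubeconfig problems. A quick hypothetical spot-check (the same /livez endpoint kubeadm's control-plane-check uses; not part of the test harness) confirms nothing is listening, consistent with the empty container-status table above:

	curl -k https://192.168.49.2:8441/livez
	# expected here: curl: (7) Failed to connect to 192.168.49.2 port 8441: Connection refused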
	
	
	==> dmesg <==
	
	
	==> kernel <==
	 14:28:05 up  5:10,  0 user,  load average: 0.41, 0.37, 0.53
	Linux functional-135520 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 06 14:27:59 functional-135520 kubelet[1801]: E1006 14:27:59.551095    1801 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 06 14:27:59 functional-135520 kubelet[1801]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 06 14:27:59 functional-135520 kubelet[1801]:  > podSandboxID="f122bf3cdcc12aa8e4b9a0e1655bceae045fdc99afe781ed4e5deffc77adf21d"
	Oct 06 14:27:59 functional-135520 kubelet[1801]: E1006 14:27:59.551182    1801 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 06 14:27:59 functional-135520 kubelet[1801]:         container etcd start failed in pod etcd-functional-135520_kube-system(f24ebbe4b3fc964d32e35d345c0d3653): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 06 14:27:59 functional-135520 kubelet[1801]:  > logger="UnhandledError"
	Oct 06 14:27:59 functional-135520 kubelet[1801]: E1006 14:27:59.551233    1801 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-functional-135520" podUID="f24ebbe4b3fc964d32e35d345c0d3653"
	Oct 06 14:27:59 functional-135520 kubelet[1801]: E1006 14:27:59.551396    1801 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 06 14:27:59 functional-135520 kubelet[1801]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 06 14:27:59 functional-135520 kubelet[1801]:  > podSandboxID="a92786c5eb4654629f78c624cdcfef7af25c891888e7f9c4c81b2755c377da1a"
	Oct 06 14:27:59 functional-135520 kubelet[1801]: E1006 14:27:59.551465    1801 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 06 14:27:59 functional-135520 kubelet[1801]:         container kube-scheduler start failed in pod kube-scheduler-functional-135520_kube-system(5115bd1eba9594a3f2b99b5d6a4b9d59): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 06 14:27:59 functional-135520 kubelet[1801]:  > logger="UnhandledError"
	Oct 06 14:27:59 functional-135520 kubelet[1801]: E1006 14:27:59.552624    1801 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-functional-135520" podUID="5115bd1eba9594a3f2b99b5d6a4b9d59"
	Oct 06 14:28:00 functional-135520 kubelet[1801]: E1006 14:28:00.835444    1801 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://192.168.49.2:8441/api/v1/namespaces/default/events/functional-135520.186beca30fea008b\": dial tcp 192.168.49.2:8441: connect: connection refused" event="&Event{ObjectMeta:{functional-135520.186beca30fea008b  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-135520,UID:functional-135520,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node functional-135520 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:functional-135520,},FirstTimestamp:2025-10-06 14:17:44.509128843 +0000 UTC m=+0.464938753,LastTimestamp:2025-10-06 14:17:44.510554344 +0000 UTC m=+0.466364247,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-135520,}"
	Oct 06 14:28:02 functional-135520 kubelet[1801]: E1006 14:28:02.515612    1801 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-135520\" not found" node="functional-135520"
	Oct 06 14:28:02 functional-135520 kubelet[1801]: E1006 14:28:02.544276    1801 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 06 14:28:02 functional-135520 kubelet[1801]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 06 14:28:02 functional-135520 kubelet[1801]:  > podSandboxID="c8563dd0b37e233739b3c3a382aa7aa99838d00dddfb4c17bcee8072fc8b2e15"
	Oct 06 14:28:02 functional-135520 kubelet[1801]: E1006 14:28:02.544398    1801 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 06 14:28:02 functional-135520 kubelet[1801]:         container kube-apiserver start failed in pod kube-apiserver-functional-135520_kube-system(64c921c0d544efd1faaa2d85c050bc13): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 06 14:28:02 functional-135520 kubelet[1801]:  > logger="UnhandledError"
	Oct 06 14:28:02 functional-135520 kubelet[1801]: E1006 14:28:02.544446    1801 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-functional-135520" podUID="64c921c0d544efd1faaa2d85c050bc13"
	Oct 06 14:28:03 functional-135520 kubelet[1801]: E1006 14:28:03.897666    1801 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	Oct 06 14:28:04 functional-135520 kubelet[1801]: E1006 14:28:04.553859    1801 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-135520\" not found"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-135520 -n functional-135520
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-135520 -n functional-135520: exit status 2 (298.014813ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-135520" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmdDirectly (2.09s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (737s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-135520 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:772: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-135520 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 80 (12m15.037370105s)

                                                
                                                
-- stdout --
	* [functional-135520] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21701
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21701-626179/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21701-626179/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "functional-135520" primary control-plane node in "functional-135520" cluster
	* Pulling base image v0.0.48-1759382731-21643 ...
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	  - apiserver.enable-admission-plugins=NamespaceAutoProvision
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 502.528699ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000416419s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000737625s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.00070414s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.501975645s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000134857s
	[control-plane-check] kube-scheduler is not healthy after 4m0.00022136s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000206831s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8441/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline]
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.501975645s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000134857s
	[control-plane-check] kube-scheduler is not healthy after 4m0.00022136s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000206831s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI.
	Here is one example of how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8441/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline]
	To see the stack trace of this error execute with --v=5 or higher
	
	* 

                                                
                                                
** /stderr **
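
The kubeadm error captured above reports all three control-plane health endpoints as unreachable: kube-apiserver on 192.168.49.2:8441, kube-controller-manager on 127.0.0.1:10257, and kube-scheduler on 127.0.0.1:10259. A minimal manual re-check of those endpoints, plus the crictl listing kubeadm suggests, could look like the following sketch (it assumes the functional-135520 profile from this run is still up and that curl is available in the kicbase image):

	# probe the same health endpoints kubeadm was polling
	minikube -p functional-135520 ssh -- curl -ks https://192.168.49.2:8441/livez
	minikube -p functional-135520 ssh -- curl -ks https://127.0.0.1:10257/healthz
	minikube -p functional-135520 ssh -- curl -ks https://127.0.0.1:10259/livez
	# list all kube containers (including exited ones) to find the crashed component
	minikube -p functional-135520 ssh -- sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a
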
functional_test.go:774: failed to restart minikube. args "out/minikube-linux-amd64 start -p functional-135520 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 80
functional_test.go:776: restart took 12m15.041808979s for "functional-135520" cluster.
I1006 14:40:21.513855  629719 config.go:182] Loaded profile config "functional-135520": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/serial/ExtraConfig]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/serial/ExtraConfig]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-135520
helpers_test.go:243: (dbg) docker inspect functional-135520:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "3dd9a226ea42760de316c3cd10c240a06a297d856d2072b1bb8c9de31097dd20",
	        "Created": "2025-10-06T14:13:32.283355011Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 644403,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-06T14:13:32.318096257Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/3dd9a226ea42760de316c3cd10c240a06a297d856d2072b1bb8c9de31097dd20/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3dd9a226ea42760de316c3cd10c240a06a297d856d2072b1bb8c9de31097dd20/hostname",
	        "HostsPath": "/var/lib/docker/containers/3dd9a226ea42760de316c3cd10c240a06a297d856d2072b1bb8c9de31097dd20/hosts",
	        "LogPath": "/var/lib/docker/containers/3dd9a226ea42760de316c3cd10c240a06a297d856d2072b1bb8c9de31097dd20/3dd9a226ea42760de316c3cd10c240a06a297d856d2072b1bb8c9de31097dd20-json.log",
	        "Name": "/functional-135520",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-135520:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-135520",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "3dd9a226ea42760de316c3cd10c240a06a297d856d2072b1bb8c9de31097dd20",
	                "LowerDir": "/var/lib/docker/overlay2/fc963905026931708302dacddcd89a9d41c6b02cea585cc1ff491aa62dc8d60a-init/diff:/var/lib/docker/overlay2/498c39ad2e273bbda04a4b230222b9767ea2da097b1fe98436168d26143cd080/diff",
	                "MergedDir": "/var/lib/docker/overlay2/fc963905026931708302dacddcd89a9d41c6b02cea585cc1ff491aa62dc8d60a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/fc963905026931708302dacddcd89a9d41c6b02cea585cc1ff491aa62dc8d60a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/fc963905026931708302dacddcd89a9d41c6b02cea585cc1ff491aa62dc8d60a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-135520",
	                "Source": "/var/lib/docker/volumes/functional-135520/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-135520",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-135520",
	                "name.minikube.sigs.k8s.io": "functional-135520",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6368ffca3e5840f94a34614c511d9f0a0a4ca0d05de4fe1f94c8bfdc332f1a62",
	            "SandboxKey": "/var/run/docker/netns/6368ffca3e58",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32878"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32879"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32882"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32880"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32881"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-135520": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ae:d1:94:25:38:1c",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f712be59dd18dac98bed5f234c9f77a39e85277143d6f46285adcd3b0185d552",
	                    "EndpointID": "b816964b653b1b5116e3262dfdc87af272931013ef5b9e2714c9ff7357118a6f",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-135520",
	                        "3dd9a226ea42"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
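
The "Ports" section of the inspect output above records which localhost port forwards to each container port. Those mappings can be read back with docker inspect's Go-template flag, the same template this log applies to 22/tcp further down; for example (a one-liner sketch using the container name and the 8441/tcp mapping shown above):

	# prints 32881, the host port that forwards to apiserver port 8441
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-135520
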
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-135520 -n functional-135520
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-135520 -n functional-135520: exit status 2 (308.614503ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctional/serial/ExtraConfig FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/serial/ExtraConfig]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-135520 logs -n 25
helpers_test.go:260: TestFunctional/serial/ExtraConfig logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                     ARGS                                                      │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ unpause │ nospam-500584 --log_dir /tmp/nospam-500584 unpause                                                            │ nospam-500584     │ jenkins │ v1.37.0 │ 06 Oct 25 14:13 UTC │ 06 Oct 25 14:13 UTC │
	│ unpause │ nospam-500584 --log_dir /tmp/nospam-500584 unpause                                                            │ nospam-500584     │ jenkins │ v1.37.0 │ 06 Oct 25 14:13 UTC │ 06 Oct 25 14:13 UTC │
	│ unpause │ nospam-500584 --log_dir /tmp/nospam-500584 unpause                                                            │ nospam-500584     │ jenkins │ v1.37.0 │ 06 Oct 25 14:13 UTC │ 06 Oct 25 14:13 UTC │
	│ stop    │ nospam-500584 --log_dir /tmp/nospam-500584 stop                                                               │ nospam-500584     │ jenkins │ v1.37.0 │ 06 Oct 25 14:13 UTC │ 06 Oct 25 14:13 UTC │
	│ stop    │ nospam-500584 --log_dir /tmp/nospam-500584 stop                                                               │ nospam-500584     │ jenkins │ v1.37.0 │ 06 Oct 25 14:13 UTC │ 06 Oct 25 14:13 UTC │
	│ stop    │ nospam-500584 --log_dir /tmp/nospam-500584 stop                                                               │ nospam-500584     │ jenkins │ v1.37.0 │ 06 Oct 25 14:13 UTC │ 06 Oct 25 14:13 UTC │
	│ delete  │ -p nospam-500584                                                                                              │ nospam-500584     │ jenkins │ v1.37.0 │ 06 Oct 25 14:13 UTC │ 06 Oct 25 14:13 UTC │
	│ start   │ -p functional-135520 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:13 UTC │                     │
	│ start   │ -p functional-135520 --alsologtostderr -v=8                                                                   │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:21 UTC │                     │
	│ cache   │ functional-135520 cache add registry.k8s.io/pause:3.1                                                         │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:27 UTC │ 06 Oct 25 14:27 UTC │
	│ cache   │ functional-135520 cache add registry.k8s.io/pause:3.3                                                         │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:27 UTC │ 06 Oct 25 14:27 UTC │
	│ cache   │ functional-135520 cache add registry.k8s.io/pause:latest                                                      │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:27 UTC │ 06 Oct 25 14:27 UTC │
	│ cache   │ functional-135520 cache add minikube-local-cache-test:functional-135520                                       │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:27 UTC │ 06 Oct 25 14:28 UTC │
	│ cache   │ functional-135520 cache delete minikube-local-cache-test:functional-135520                                    │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:28 UTC │ 06 Oct 25 14:28 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                              │ minikube          │ jenkins │ v1.37.0 │ 06 Oct 25 14:28 UTC │ 06 Oct 25 14:28 UTC │
	│ cache   │ list                                                                                                          │ minikube          │ jenkins │ v1.37.0 │ 06 Oct 25 14:28 UTC │ 06 Oct 25 14:28 UTC │
	│ ssh     │ functional-135520 ssh sudo crictl images                                                                      │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:28 UTC │ 06 Oct 25 14:28 UTC │
	│ ssh     │ functional-135520 ssh sudo crictl rmi registry.k8s.io/pause:latest                                            │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:28 UTC │ 06 Oct 25 14:28 UTC │
	│ ssh     │ functional-135520 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                       │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:28 UTC │                     │
	│ cache   │ functional-135520 cache reload                                                                                │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:28 UTC │ 06 Oct 25 14:28 UTC │
	│ ssh     │ functional-135520 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                       │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:28 UTC │ 06 Oct 25 14:28 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                              │ minikube          │ jenkins │ v1.37.0 │ 06 Oct 25 14:28 UTC │ 06 Oct 25 14:28 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                           │ minikube          │ jenkins │ v1.37.0 │ 06 Oct 25 14:28 UTC │ 06 Oct 25 14:28 UTC │
	│ kubectl │ functional-135520 kubectl -- --context functional-135520 get pods                                             │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:28 UTC │                     │
	│ start   │ -p functional-135520 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all      │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:28 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/06 14:28:06
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1006 14:28:06.515575  656123 out.go:360] Setting OutFile to fd 1 ...
	I1006 14:28:06.515775  656123 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 14:28:06.515777  656123 out.go:374] Setting ErrFile to fd 2...
	I1006 14:28:06.515780  656123 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 14:28:06.515998  656123 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-626179/.minikube/bin
	I1006 14:28:06.516461  656123 out.go:368] Setting JSON to false
	I1006 14:28:06.517416  656123 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":18622,"bootTime":1759742264,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1006 14:28:06.517495  656123 start.go:140] virtualization: kvm guest
	I1006 14:28:06.519514  656123 out.go:179] * [functional-135520] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1006 14:28:06.520800  656123 out.go:179]   - MINIKUBE_LOCATION=21701
	I1006 14:28:06.520851  656123 notify.go:220] Checking for updates...
	I1006 14:28:06.523025  656123 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1006 14:28:06.524163  656123 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21701-626179/kubeconfig
	I1006 14:28:06.525184  656123 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21701-626179/.minikube
	I1006 14:28:06.526184  656123 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1006 14:28:06.527199  656123 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1006 14:28:06.528788  656123 config.go:182] Loaded profile config "functional-135520": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 14:28:06.528884  656123 driver.go:421] Setting default libvirt URI to qemu:///system
	I1006 14:28:06.553892  656123 docker.go:123] docker version: linux-28.5.0:Docker Engine - Community
	I1006 14:28:06.554005  656123 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 14:28:06.610913  656123 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:false NGoroutines:58 SystemTime:2025-10-06 14:28:06.599957285 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1006 14:28:06.611014  656123 docker.go:318] overlay module found
	I1006 14:28:06.612730  656123 out.go:179] * Using the docker driver based on existing profile
	I1006 14:28:06.613792  656123 start.go:304] selected driver: docker
	I1006 14:28:06.613801  656123 start.go:924] validating driver "docker" against &{Name:functional-135520 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-135520 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false Custom
QemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 14:28:06.613876  656123 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1006 14:28:06.613960  656123 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 14:28:06.672658  656123 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:false NGoroutines:58 SystemTime:2025-10-06 14:28:06.663055015 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1006 14:28:06.673343  656123 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1006 14:28:06.673382  656123 cni.go:84] Creating CNI manager for ""
	I1006 14:28:06.673449  656123 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1006 14:28:06.673491  656123 start.go:348] cluster config:
	{Name:functional-135520 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-135520 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQ
emuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 14:28:06.675542  656123 out.go:179] * Starting "functional-135520" primary control-plane node in "functional-135520" cluster
	I1006 14:28:06.676765  656123 cache.go:123] Beginning downloading kic base image for docker with crio
	I1006 14:28:06.678012  656123 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1006 14:28:06.679109  656123 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1006 14:28:06.679148  656123 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21701-626179/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1006 14:28:06.679171  656123 cache.go:58] Caching tarball of preloaded images
	I1006 14:28:06.679229  656123 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1006 14:28:06.679315  656123 preload.go:233] Found /home/jenkins/minikube-integration/21701-626179/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1006 14:28:06.679322  656123 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1006 14:28:06.679424  656123 profile.go:143] Saving config to /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/config.json ...
	I1006 14:28:06.701440  656123 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1006 14:28:06.701451  656123 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1006 14:28:06.701470  656123 cache.go:232] Successfully downloaded all kic artifacts
	I1006 14:28:06.701500  656123 start.go:360] acquireMachinesLock for functional-135520: {Name:mk634323c4619e77647ac9d9aaca492e399526ae Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1006 14:28:06.701582  656123 start.go:364] duration metric: took 55.883µs to acquireMachinesLock for "functional-135520"
	I1006 14:28:06.701608  656123 start.go:96] Skipping create...Using existing machine configuration
	I1006 14:28:06.701614  656123 fix.go:54] fixHost starting: 
	I1006 14:28:06.701815  656123 cli_runner.go:164] Run: docker container inspect functional-135520 --format={{.State.Status}}
	I1006 14:28:06.719582  656123 fix.go:112] recreateIfNeeded on functional-135520: state=Running err=<nil>
	W1006 14:28:06.719608  656123 fix.go:138] unexpected machine state, will restart: <nil>
	I1006 14:28:06.721479  656123 out.go:252] * Updating the running docker "functional-135520" container ...
	I1006 14:28:06.721509  656123 machine.go:93] provisionDockerMachine start ...
	I1006 14:28:06.721596  656123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-135520
	I1006 14:28:06.739776  656123 main.go:141] libmachine: Using SSH client type: native
	I1006 14:28:06.740016  656123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32878 <nil> <nil>}
	I1006 14:28:06.740022  656123 main.go:141] libmachine: About to run SSH command:
	hostname
	I1006 14:28:06.883328  656123 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-135520
	
	I1006 14:28:06.883355  656123 ubuntu.go:182] provisioning hostname "functional-135520"
	I1006 14:28:06.883416  656123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-135520
	I1006 14:28:06.901008  656123 main.go:141] libmachine: Using SSH client type: native
	I1006 14:28:06.901274  656123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32878 <nil> <nil>}
	I1006 14:28:06.901282  656123 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-135520 && echo "functional-135520" | sudo tee /etc/hostname
	I1006 14:28:07.054829  656123 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-135520
	
	I1006 14:28:07.054893  656123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-135520
	I1006 14:28:07.073103  656123 main.go:141] libmachine: Using SSH client type: native
	I1006 14:28:07.073400  656123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32878 <nil> <nil>}
	I1006 14:28:07.073412  656123 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-135520' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-135520/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-135520' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1006 14:28:07.218044  656123 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1006 14:28:07.218064  656123 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21701-626179/.minikube CaCertPath:/home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21701-626179/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21701-626179/.minikube}
	I1006 14:28:07.218086  656123 ubuntu.go:190] setting up certificates
	I1006 14:28:07.218097  656123 provision.go:84] configureAuth start
	I1006 14:28:07.218147  656123 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-135520
	I1006 14:28:07.235320  656123 provision.go:143] copyHostCerts
	I1006 14:28:07.235375  656123 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-626179/.minikube/ca.pem, removing ...
	I1006 14:28:07.235390  656123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-626179/.minikube/ca.pem
	I1006 14:28:07.235462  656123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21701-626179/.minikube/ca.pem (1082 bytes)
	I1006 14:28:07.235557  656123 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-626179/.minikube/cert.pem, removing ...
	I1006 14:28:07.235561  656123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-626179/.minikube/cert.pem
	I1006 14:28:07.235585  656123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21701-626179/.minikube/cert.pem (1123 bytes)
	I1006 14:28:07.235653  656123 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-626179/.minikube/key.pem, removing ...
	I1006 14:28:07.235656  656123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-626179/.minikube/key.pem
	I1006 14:28:07.235685  656123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21701-626179/.minikube/key.pem (1679 bytes)
	I1006 14:28:07.235742  656123 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21701-626179/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca-key.pem org=jenkins.functional-135520 san=[127.0.0.1 192.168.49.2 functional-135520 localhost minikube]
	I1006 14:28:07.452963  656123 provision.go:177] copyRemoteCerts
	I1006 14:28:07.453021  656123 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1006 14:28:07.453058  656123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-135520
	I1006 14:28:07.470979  656123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32878 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/functional-135520/id_rsa Username:docker}
	I1006 14:28:07.572166  656123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1006 14:28:07.589268  656123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1006 14:28:07.606864  656123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1006 14:28:07.624012  656123 provision.go:87] duration metric: took 405.903097ms to configureAuth
	I1006 14:28:07.624031  656123 ubuntu.go:206] setting minikube options for container-runtime
	I1006 14:28:07.624198  656123 config.go:182] Loaded profile config "functional-135520": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 14:28:07.624358  656123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-135520
	I1006 14:28:07.642129  656123 main.go:141] libmachine: Using SSH client type: native
	I1006 14:28:07.642348  656123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32878 <nil> <nil>}
	I1006 14:28:07.642358  656123 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1006 14:28:07.930562  656123 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1006 14:28:07.930579  656123 machine.go:96] duration metric: took 1.209063221s to provisionDockerMachine
	I1006 14:28:07.930589  656123 start.go:293] postStartSetup for "functional-135520" (driver="docker")
	I1006 14:28:07.930598  656123 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1006 14:28:07.930651  656123 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1006 14:28:07.930687  656123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-135520
	I1006 14:28:07.948006  656123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32878 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/functional-135520/id_rsa Username:docker}
	I1006 14:28:08.049510  656123 ssh_runner.go:195] Run: cat /etc/os-release
	I1006 14:28:08.053027  656123 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1006 14:28:08.053042  656123 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1006 14:28:08.053061  656123 filesync.go:126] Scanning /home/jenkins/minikube-integration/21701-626179/.minikube/addons for local assets ...
	I1006 14:28:08.053110  656123 filesync.go:126] Scanning /home/jenkins/minikube-integration/21701-626179/.minikube/files for local assets ...
	I1006 14:28:08.053177  656123 filesync.go:149] local asset: /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem -> 6297192.pem in /etc/ssl/certs
	I1006 14:28:08.053267  656123 filesync.go:149] local asset: /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/test/nested/copy/629719/hosts -> hosts in /etc/test/nested/copy/629719
	I1006 14:28:08.053298  656123 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/629719
	I1006 14:28:08.060796  656123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem --> /etc/ssl/certs/6297192.pem (1708 bytes)
	I1006 14:28:08.077999  656123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/test/nested/copy/629719/hosts --> /etc/test/nested/copy/629719/hosts (40 bytes)
	I1006 14:28:08.094766  656123 start.go:296] duration metric: took 164.165544ms for postStartSetup
	I1006 14:28:08.094821  656123 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1006 14:28:08.094852  656123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-135520
	I1006 14:28:08.112238  656123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32878 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/functional-135520/id_rsa Username:docker}
	I1006 14:28:08.210200  656123 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1006 14:28:08.214744  656123 fix.go:56] duration metric: took 1.513121746s for fixHost
	I1006 14:28:08.214763  656123 start.go:83] releasing machines lock for "functional-135520", held for 1.513172056s
	I1006 14:28:08.214831  656123 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-135520
	I1006 14:28:08.231996  656123 ssh_runner.go:195] Run: cat /version.json
	I1006 14:28:08.232006  656123 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1006 14:28:08.232033  656123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-135520
	I1006 14:28:08.232059  656123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-135520
	I1006 14:28:08.250015  656123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32878 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/functional-135520/id_rsa Username:docker}
	I1006 14:28:08.250313  656123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32878 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/functional-135520/id_rsa Username:docker}
	I1006 14:28:08.415268  656123 ssh_runner.go:195] Run: systemctl --version
	I1006 14:28:08.422068  656123 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1006 14:28:08.458421  656123 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1006 14:28:08.463104  656123 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1006 14:28:08.463164  656123 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1006 14:28:08.471006  656123 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1006 14:28:08.471018  656123 start.go:495] detecting cgroup driver to use...
	I1006 14:28:08.471045  656123 detect.go:190] detected "systemd" cgroup driver on host os
	I1006 14:28:08.471088  656123 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1006 14:28:08.485271  656123 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1006 14:28:08.496859  656123 docker.go:218] disabling cri-docker service (if available) ...
	I1006 14:28:08.496895  656123 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1006 14:28:08.510507  656123 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1006 14:28:08.522301  656123 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1006 14:28:08.600902  656123 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1006 14:28:08.681762  656123 docker.go:234] disabling docker service ...
	I1006 14:28:08.681827  656123 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1006 14:28:08.696663  656123 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1006 14:28:08.708614  656123 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1006 14:28:08.788151  656123 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1006 14:28:08.872163  656123 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1006 14:28:08.884753  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1006 14:28:08.898897  656123 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1006 14:28:08.898940  656123 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:28:08.907545  656123 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1006 14:28:08.907597  656123 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:28:08.916027  656123 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:28:08.924428  656123 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:28:08.932498  656123 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1006 14:28:08.939984  656123 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:28:08.948324  656123 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:28:08.956705  656123 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:28:08.964969  656123 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1006 14:28:08.971804  656123 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1006 14:28:08.978693  656123 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 14:28:09.061389  656123 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1006 14:28:09.170335  656123 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1006 14:28:09.170401  656123 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1006 14:28:09.174497  656123 start.go:563] Will wait 60s for crictl version
	I1006 14:28:09.174546  656123 ssh_runner.go:195] Run: which crictl
	I1006 14:28:09.177947  656123 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1006 14:28:09.201915  656123 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1006 14:28:09.201972  656123 ssh_runner.go:195] Run: crio --version
	I1006 14:28:09.230589  656123 ssh_runner.go:195] Run: crio --version
	I1006 14:28:09.260606  656123 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1006 14:28:09.261947  656123 cli_runner.go:164] Run: docker network inspect functional-135520 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1006 14:28:09.278672  656123 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1006 14:28:09.284367  656123 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1006 14:28:09.285382  656123 kubeadm.go:883] updating cluster {Name:functional-135520 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-135520 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1006 14:28:09.285546  656123 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1006 14:28:09.285603  656123 ssh_runner.go:195] Run: sudo crictl images --output json
	I1006 14:28:09.318027  656123 crio.go:514] all images are preloaded for cri-o runtime.
	I1006 14:28:09.318039  656123 crio.go:433] Images already preloaded, skipping extraction
	I1006 14:28:09.318088  656123 ssh_runner.go:195] Run: sudo crictl images --output json
	I1006 14:28:09.342904  656123 crio.go:514] all images are preloaded for cri-o runtime.
	I1006 14:28:09.342917  656123 cache_images.go:85] Images are preloaded, skipping loading
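The two `crictl images --output json` runs above confirm that the preload tarball already populated the image store, so nothing needs to be pulled or extracted. A sketch of such a check; the JSON field names (`images`, `repoTags`) follow crictl's output as I understand it and should be treated as an assumption:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type criImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		fmt.Println("crictl:", err)
		return
	}
	var imgs criImages
	if err := json.Unmarshal(out, &imgs); err != nil {
		fmt.Println("decode:", err)
		return
	}
	have := map[string]bool{}
	for _, img := range imgs.Images {
		for _, t := range img.RepoTags {
			have[t] = true
		}
	}
	// Example entry only; the real preload list covers every control-plane image.
	for _, want := range []string{"registry.k8s.io/kube-apiserver:v1.34.1"} {
		if !have[want] {
			fmt.Println("missing preloaded image:", want)
		}
	}
}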
	I1006 14:28:09.342923  656123 kubeadm.go:934] updating node { 192.168.49.2 8441 v1.34.1 crio true true} ...
	I1006 14:28:09.343012  656123 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-135520 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:functional-135520 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
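The kubelet unit text above relies on the standard systemd drop-in trick: an empty `ExecStart=` clears the command inherited from the base unit before the minikube-specific command line is set. A minimal sketch of rendering such a drop-in with Go's text/template, using the flag values shown in the log (the template itself is illustrative):

package main

import (
	"os"
	"text/template"
)

// An empty ExecStart= resets the command inherited from the base unit;
// systemd requires that before a drop-in may redefine it.
const dropIn = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart={{.KubeletPath}} --hostname-override={{.NodeName}} --node-ip={{.NodeIP}} --kubeconfig=/etc/kubernetes/kubelet.conf

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(dropIn))
	if err := t.Execute(os.Stdout, map[string]string{
		"KubeletPath": "/var/lib/minikube/binaries/v1.34.1/kubelet",
		"NodeName":    "functional-135520",
		"NodeIP":      "192.168.49.2",
	}); err != nil {
		panic(err)
	}
}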
	I1006 14:28:09.343066  656123 ssh_runner.go:195] Run: crio config
	I1006 14:28:09.388889  656123 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1006 14:28:09.388909  656123 cni.go:84] Creating CNI manager for ""
	I1006 14:28:09.388921  656123 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1006 14:28:09.388932  656123 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1006 14:28:09.388955  656123 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-135520 NodeName:functional-135520 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1006 14:28:09.389087  656123 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-135520"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
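The rendered kubeadm.yaml above stacks four YAML documents in one file: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration (note that evictionHard at 0% and imageGCHighThresholdPercent at 100 deliberately disable disk-pressure management inside the test node). A sketch of walking that multi-document stream with gopkg.in/yaml.v3, assuming that library is available:

package main

import (
	"bytes"
	"fmt"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml")
	if err != nil {
		panic(err)
	}
	// yaml.v3 decodes a multi-document stream one document per Decode call.
	dec := yaml.NewDecoder(bytes.NewReader(data))
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err != nil {
			break // io.EOF terminates the stream
		}
		fmt.Printf("%s (%s)\n", doc.Kind, doc.APIVersion)
	}
}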
	
	I1006 14:28:09.389140  656123 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1006 14:28:09.397400  656123 binaries.go:44] Found k8s binaries, skipping transfer
	I1006 14:28:09.397454  656123 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1006 14:28:09.404846  656123 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1006 14:28:09.416672  656123 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1006 14:28:09.428910  656123 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2063 bytes)
	I1006 14:28:09.440961  656123 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1006 14:28:09.444714  656123 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 14:28:09.533656  656123 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1006 14:28:09.546185  656123 certs.go:69] Setting up /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520 for IP: 192.168.49.2
	I1006 14:28:09.546197  656123 certs.go:195] generating shared ca certs ...
	I1006 14:28:09.546290  656123 certs.go:227] acquiring lock for ca certs: {Name:mka0cc25cb6a953e937aa825fc55167759271aaf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:28:09.546440  656123 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21701-626179/.minikube/ca.key
	I1006 14:28:09.546475  656123 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21701-626179/.minikube/proxy-client-ca.key
	I1006 14:28:09.546482  656123 certs.go:257] generating profile certs ...
	I1006 14:28:09.546559  656123 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/client.key
	I1006 14:28:09.546594  656123 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/apiserver.key.72a46e8e
	I1006 14:28:09.546623  656123 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/proxy-client.key
	I1006 14:28:09.546728  656123 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/629719.pem (1338 bytes)
	W1006 14:28:09.546750  656123 certs.go:480] ignoring /home/jenkins/minikube-integration/21701-626179/.minikube/certs/629719_empty.pem, impossibly tiny 0 bytes
	I1006 14:28:09.546756  656123 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca-key.pem (1675 bytes)
	I1006 14:28:09.546775  656123 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem (1082 bytes)
	I1006 14:28:09.546793  656123 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/cert.pem (1123 bytes)
	I1006 14:28:09.546809  656123 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/key.pem (1679 bytes)
	I1006 14:28:09.546841  656123 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem (1708 bytes)
	I1006 14:28:09.547453  656123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1006 14:28:09.564638  656123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1006 14:28:09.581181  656123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1006 14:28:09.597600  656123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1006 14:28:09.614361  656123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1006 14:28:09.630631  656123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1006 14:28:09.647147  656123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1006 14:28:09.663361  656123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1006 14:28:09.679821  656123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem --> /usr/share/ca-certificates/6297192.pem (1708 bytes)
	I1006 14:28:09.696763  656123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1006 14:28:09.713335  656123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/certs/629719.pem --> /usr/share/ca-certificates/629719.pem (1338 bytes)
	I1006 14:28:09.729791  656123 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1006 14:28:09.741445  656123 ssh_runner.go:195] Run: openssl version
	I1006 14:28:09.747314  656123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1006 14:28:09.755183  656123 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1006 14:28:09.758724  656123 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  6 13:56 /usr/share/ca-certificates/minikubeCA.pem
	I1006 14:28:09.758757  656123 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1006 14:28:09.792226  656123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1006 14:28:09.799947  656123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/629719.pem && ln -fs /usr/share/ca-certificates/629719.pem /etc/ssl/certs/629719.pem"
	I1006 14:28:09.808163  656123 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/629719.pem
	I1006 14:28:09.811711  656123 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  6 14:13 /usr/share/ca-certificates/629719.pem
	I1006 14:28:09.811747  656123 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/629719.pem
	I1006 14:28:09.845740  656123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/629719.pem /etc/ssl/certs/51391683.0"
	I1006 14:28:09.854138  656123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6297192.pem && ln -fs /usr/share/ca-certificates/6297192.pem /etc/ssl/certs/6297192.pem"
	I1006 14:28:09.862651  656123 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6297192.pem
	I1006 14:28:09.866319  656123 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  6 14:13 /usr/share/ca-certificates/6297192.pem
	I1006 14:28:09.866364  656123 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6297192.pem
	I1006 14:28:09.900583  656123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6297192.pem /etc/ssl/certs/3ec20f2e.0"
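The `openssl x509 -hash` / `ln -fs` pairs above implement OpenSSL's hashed-directory lookup: a CA under /etc/ssl/certs is only found during verification if a <subject-hash>.0 symlink points at it (b5213941.0, 51391683.0, and 3ec20f2e.0 in this run). A sketch of that step, with an illustrative helper name:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// linkBySubjectHash creates the <hash>.0 symlink OpenSSL uses to locate a
// CA during verification, as the log does for minikubeCA.pem and friends.
func linkBySubjectHash(pem string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	return exec.Command("sudo", "ln", "-fs", pem, link).Run()
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println(err)
	}
}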
	I1006 14:28:09.908997  656123 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1006 14:28:09.912812  656123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1006 14:28:09.946819  656123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1006 14:28:09.981139  656123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1006 14:28:10.015748  656123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1006 14:28:10.049705  656123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1006 14:28:10.084715  656123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
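Each `-checkend 86400` call above asks OpenSSL whether the certificate expires within the next 24 hours (86400 seconds): exit status 0 means it stays valid past the window, non-zero means it needs regeneration. A sketch of wrapping that check:

package main

import (
	"fmt"
	"os/exec"
)

// expiresSoon reports whether the cert is within `seconds` of expiry,
// matching `openssl x509 -checkend` semantics: exit 0 means still valid
// past the window, exit 1 means it will expire within it.
func expiresSoon(certPath string, seconds int) bool {
	cmd := exec.Command("openssl", "x509", "-noout", "-in", certPath,
		"-checkend", fmt.Sprint(seconds))
	return cmd.Run() != nil
}

func main() {
	for _, c := range []string{
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
	} {
		fmt.Println(c, "expires within 24h:", expiresSoon(c, 86400))
	}
}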
	I1006 14:28:10.119782  656123 kubeadm.go:400] StartCluster: {Name:functional-135520 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-135520 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 14:28:10.119890  656123 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1006 14:28:10.119973  656123 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1006 14:28:10.149719  656123 cri.go:89] found id: ""
	I1006 14:28:10.149774  656123 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1006 14:28:10.158129  656123 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1006 14:28:10.158143  656123 kubeadm.go:597] restartPrimaryControlPlane start ...
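The decision to restart rather than re-init hinges on the single `sudo ls` above: if the kubelet flags file, kubelet config, and etcd data directory all exist, minikube takes the restartPrimaryControlPlane path. A sketch of that probe, using the paths from the log:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// All three paths must exist for the restart path to be chosen;
	// `ls` exits non-zero if any of them is missing.
	err := exec.Command("sudo", "ls",
		"/var/lib/kubelet/kubeadm-flags.env",
		"/var/lib/kubelet/config.yaml",
		"/var/lib/minikube/etcd").Run()
	if err != nil {
		fmt.Println("no prior state: full kubeadm init")
		return
	}
	fmt.Println("existing configuration found: attempting cluster restart")
}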
	I1006 14:28:10.158217  656123 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1006 14:28:10.166324  656123 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1006 14:28:10.166847  656123 kubeconfig.go:125] found "functional-135520" server: "https://192.168.49.2:8441"
	I1006 14:28:10.168240  656123 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1006 14:28:10.175929  656123 kubeadm.go:644] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-10-06 14:13:37.047601698 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-10-06 14:28:09.438461717 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
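Config drift detection is a plain `diff -u` between the live kubeadm.yaml and the freshly rendered .new file; here the only drift is the enable-admission-plugins value, which is enough to trigger a full reconfigure. A sketch of the same check:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// diff exits 0 when the files are identical and 1 when they differ;
	// any difference is treated as drift and forces a reconfigure.
	cmd := exec.Command("sudo", "diff", "-u",
		"/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	out, err := cmd.CombinedOutput()
	if err != nil {
		fmt.Printf("drift detected, will reconfigure:\n%s", out)
		return
	}
	fmt.Println("kubeadm config unchanged")
}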
	I1006 14:28:10.175939  656123 kubeadm.go:1160] stopping kube-system containers ...
	I1006 14:28:10.175953  656123 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1006 14:28:10.175996  656123 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1006 14:28:10.204289  656123 cri.go:89] found id: ""
	I1006 14:28:10.204358  656123 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1006 14:28:10.246949  656123 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1006 14:28:10.255460  656123 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5635 Oct  6 14:17 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5640 Oct  6 14:17 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5676 Oct  6 14:17 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Oct  6 14:17 /etc/kubernetes/scheduler.conf
	
	I1006 14:28:10.255526  656123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1006 14:28:10.263528  656123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1006 14:28:10.271432  656123 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1006 14:28:10.271482  656123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1006 14:28:10.278844  656123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1006 14:28:10.286462  656123 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1006 14:28:10.286516  656123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1006 14:28:10.293960  656123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1006 14:28:10.301358  656123 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1006 14:28:10.301414  656123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
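The grep/rm sequence above scrubs any kubeconfig that does not reference https://control-plane.minikube.internal:8441, so the kubeadm phases below regenerate them against the right endpoint. A sketch of that scrub (the helper name is illustrative):

package main

import (
	"fmt"
	"os/exec"
)

// scrub removes a kubeconfig that does not mention the expected
// control-plane endpoint, so `kubeadm init phase kubeconfig` recreates it.
func scrub(path, endpoint string) {
	if exec.Command("sudo", "grep", endpoint, path).Run() != nil {
		fmt.Printf("%s does not reference %s, removing\n", path, endpoint)
		exec.Command("sudo", "rm", "-f", path).Run()
	}
}

func main() {
	for _, f := range []string{
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		scrub(f, "https://control-plane.minikube.internal:8441")
	}
}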
	I1006 14:28:10.308882  656123 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1006 14:28:10.316879  656123 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1006 14:28:10.360770  656123 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1006 14:28:12.195064  656123 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.834266287s)
	I1006 14:28:12.195115  656123 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1006 14:28:12.367120  656123 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1006 14:28:12.417483  656123 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
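Rather than a full `kubeadm init`, the restart path replays individual init phases in order: certs, kubeconfig, kubelet-start, control-plane, and local etcd, all against the same kubeadm.yaml. A sketch of driving those phases (the log additionally prefixes PATH with the minikube binaries directory, omitted here):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Each phase reads the same rendered config file; a failure in any
	// phase aborts the restart.
	phases := [][]string{
		{"certs", "all"},
		{"kubeconfig", "all"},
		{"kubelet-start"},
		{"control-plane", "all"},
		{"etcd", "local"},
	}
	for _, p := range phases {
		args := append([]string{"kubeadm", "init", "phase"}, p...)
		args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
		if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
			fmt.Printf("phase %v failed: %v\n%s", p, err, out)
			return
		}
	}
}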
	I1006 14:28:12.470408  656123 api_server.go:52] waiting for apiserver process to appear ...
	I1006 14:28:12.470467  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:12.971496  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:13.471359  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:13.971266  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:14.470628  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:14.970727  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:15.470821  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:15.971537  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:16.470947  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:16.970796  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:17.471324  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:17.970807  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:18.471451  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:18.970803  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:19.471285  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:19.970529  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:20.471499  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:20.971288  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:21.471188  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:21.971466  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:22.471502  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:22.971321  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:23.471284  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:23.970994  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:24.470729  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:24.971445  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:25.470644  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:25.970962  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:26.471442  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:26.971311  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:27.470610  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:27.970961  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:28.470640  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:28.971300  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:29.470626  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:29.971278  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:30.471158  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:30.970980  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:31.470603  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:31.971449  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:32.471177  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:32.970617  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:33.471419  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:33.970722  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:34.471271  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:34.970652  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:35.470921  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:35.971492  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:36.470973  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:36.971256  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:37.471394  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:37.970703  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:38.470961  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:38.970907  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:39.471451  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:39.970850  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:40.471304  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:40.971524  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:41.470744  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:41.971222  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:42.471463  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:42.970604  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:43.470720  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:43.970989  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:44.470818  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:44.970672  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:45.470866  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:45.970683  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:46.471245  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:46.970914  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:47.471423  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:47.971442  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:48.470948  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:48.971501  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:49.471382  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:49.970705  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:50.471271  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:50.971251  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:51.471164  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:51.971336  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:52.471372  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:52.970578  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:53.471263  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:53.971000  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:54.471313  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:54.970838  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:55.470657  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:55.970901  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:56.470732  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:56.971609  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:57.470670  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:57.971054  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:58.470843  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:58.971017  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:59.471644  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:59.970666  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:29:00.471498  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:29:00.970805  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:29:01.471435  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:29:01.970733  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:29:02.470885  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:29:02.970839  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:29:03.470540  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:29:03.970872  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:29:04.470727  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:29:04.970673  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:29:05.471322  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:29:05.970626  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:29:06.470920  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:29:06.970887  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:29:07.471415  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:29:07.970944  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:29:08.470610  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:29:08.971309  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:29:09.470706  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:29:09.971450  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:29:10.471425  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:29:10.971283  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:29:11.470937  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:29:11.970687  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
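Everything from 14:28:12 onward is the apiserver wait loop: pgrep for the kube-apiserver process roughly every 500 ms until the wait window elapses, after which minikube falls back to the diagnostics below. A sketch of that poll-with-deadline pattern:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForProcess polls pgrep about every 500ms until the pattern matches
// or the deadline passes, mirroring the repeated pgrep lines in the log.
func waitForProcess(pattern string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if exec.Command("sudo", "pgrep", "-xnf", pattern).Run() == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %q", pattern)
}

func main() {
	if err := waitForProcess("kube-apiserver.*minikube.*", time.Minute); err != nil {
		fmt.Println(err) // fall back to gathering diagnostics, as the log does
	}
}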
	I1006 14:29:12.471591  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:29:12.471676  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:29:12.498988  656123 cri.go:89] found id: ""
	I1006 14:29:12.499014  656123 logs.go:282] 0 containers: []
	W1006 14:29:12.499021  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:29:12.499026  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:29:12.499080  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:29:12.526057  656123 cri.go:89] found id: ""
	I1006 14:29:12.526074  656123 logs.go:282] 0 containers: []
	W1006 14:29:12.526080  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:29:12.526085  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:29:12.526164  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:29:12.553395  656123 cri.go:89] found id: ""
	I1006 14:29:12.553415  656123 logs.go:282] 0 containers: []
	W1006 14:29:12.553426  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:29:12.553433  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:29:12.553486  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:29:12.580815  656123 cri.go:89] found id: ""
	I1006 14:29:12.580836  656123 logs.go:282] 0 containers: []
	W1006 14:29:12.580846  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:29:12.580870  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:29:12.580931  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:29:12.607516  656123 cri.go:89] found id: ""
	I1006 14:29:12.607533  656123 logs.go:282] 0 containers: []
	W1006 14:29:12.607539  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:29:12.607544  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:29:12.607607  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:29:12.634248  656123 cri.go:89] found id: ""
	I1006 14:29:12.634265  656123 logs.go:282] 0 containers: []
	W1006 14:29:12.634272  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:29:12.634279  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:29:12.634335  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:29:12.660860  656123 cri.go:89] found id: ""
	I1006 14:29:12.660876  656123 logs.go:282] 0 containers: []
	W1006 14:29:12.660883  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:29:12.660893  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:29:12.660905  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:29:12.731400  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:29:12.731425  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:29:12.745150  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:29:12.745174  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:29:12.803068  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:29:12.795122    6708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:12.795709    6708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:12.797425    6708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:12.797887    6708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:12.799415    6708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:29:12.795122    6708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:12.795709    6708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:12.797425    6708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:12.797887    6708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:12.799415    6708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 14:29:12.803085  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:29:12.803098  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:29:12.870066  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:29:12.870091  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
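Each diagnostics pass gathers the same fixed set: the kubelet journal, dmesg, `kubectl describe nodes`, the CRI-O journal, and container status. The container-status command chains fallbacks (`which crictl || echo crictl`, then `docker ps -a`) so it still yields something when the runtime is unhealthy. A sketch of that fallback chain:

package main

import (
	"fmt"
	"os/exec"
)

// containerStatus mirrors the fallback in the log: prefer crictl, and
// fall back to docker when crictl fails or is absent.
func containerStatus() (string, error) {
	if out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput(); err == nil {
		return string(out), nil
	}
	out, err := exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
	return string(out), err
}

func main() {
	out, err := containerStatus()
	if err != nil {
		fmt.Println("no container runtime answered:", err)
		return
	}
	fmt.Print(out)
}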
	I1006 14:29:15.401709  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:29:15.412675  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:29:15.412725  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:29:15.438239  656123 cri.go:89] found id: ""
	I1006 14:29:15.438255  656123 logs.go:282] 0 containers: []
	W1006 14:29:15.438264  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:29:15.438270  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:29:15.438322  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:29:15.463684  656123 cri.go:89] found id: ""
	I1006 14:29:15.463701  656123 logs.go:282] 0 containers: []
	W1006 14:29:15.463709  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:29:15.463715  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:29:15.463769  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:29:15.488259  656123 cri.go:89] found id: ""
	I1006 14:29:15.488276  656123 logs.go:282] 0 containers: []
	W1006 14:29:15.488284  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:29:15.488289  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:29:15.488347  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:29:15.514676  656123 cri.go:89] found id: ""
	I1006 14:29:15.514692  656123 logs.go:282] 0 containers: []
	W1006 14:29:15.514699  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:29:15.514704  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:29:15.514762  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:29:15.540755  656123 cri.go:89] found id: ""
	I1006 14:29:15.540770  656123 logs.go:282] 0 containers: []
	W1006 14:29:15.540776  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:29:15.540781  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:29:15.540832  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:29:15.565570  656123 cri.go:89] found id: ""
	I1006 14:29:15.565588  656123 logs.go:282] 0 containers: []
	W1006 14:29:15.565598  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:29:15.565604  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:29:15.565651  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:29:15.591845  656123 cri.go:89] found id: ""
	I1006 14:29:15.591860  656123 logs.go:282] 0 containers: []
	W1006 14:29:15.591876  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:29:15.591885  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:29:15.591895  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:29:15.605051  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:29:15.605069  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:29:15.662500  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:29:15.655240    6822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:15.655743    6822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:15.657283    6822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:15.657783    6822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:15.659338    6822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:29:15.655240    6822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:15.655743    6822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:15.657283    6822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:15.657783    6822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:15.659338    6822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 14:29:15.662517  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:29:15.662531  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:29:15.727404  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:29:15.727424  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:29:15.756261  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:29:15.756279  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:29:18.330899  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:29:18.342312  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:29:18.342369  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:29:18.367886  656123 cri.go:89] found id: ""
	I1006 14:29:18.367902  656123 logs.go:282] 0 containers: []
	W1006 14:29:18.367912  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:29:18.367919  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:29:18.367967  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:29:18.394659  656123 cri.go:89] found id: ""
	I1006 14:29:18.394676  656123 logs.go:282] 0 containers: []
	W1006 14:29:18.394685  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:29:18.394691  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:29:18.394752  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:29:18.420739  656123 cri.go:89] found id: ""
	I1006 14:29:18.420762  656123 logs.go:282] 0 containers: []
	W1006 14:29:18.420773  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:29:18.420780  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:29:18.420844  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:29:18.446534  656123 cri.go:89] found id: ""
	I1006 14:29:18.446553  656123 logs.go:282] 0 containers: []
	W1006 14:29:18.446560  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:29:18.446565  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:29:18.446610  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:29:18.474847  656123 cri.go:89] found id: ""
	I1006 14:29:18.474867  656123 logs.go:282] 0 containers: []
	W1006 14:29:18.474876  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:29:18.474882  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:29:18.474940  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:29:18.500739  656123 cri.go:89] found id: ""
	I1006 14:29:18.500755  656123 logs.go:282] 0 containers: []
	W1006 14:29:18.500762  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:29:18.500767  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:29:18.500817  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:29:18.526704  656123 cri.go:89] found id: ""
	I1006 14:29:18.526720  656123 logs.go:282] 0 containers: []
	W1006 14:29:18.526726  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:29:18.526735  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:29:18.526749  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:29:18.594578  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:29:18.594601  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:29:18.608090  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:29:18.608110  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:29:18.665980  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:29:18.658366    6961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:18.658897    6961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:18.660516    6961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:18.660915    6961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:18.662586    6961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:29:18.658366    6961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:18.658897    6961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:18.660516    6961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:18.660915    6961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:18.662586    6961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 14:29:18.665999  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:29:18.666015  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:29:18.726769  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:29:18.726792  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:29:21.257561  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:29:21.269556  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:29:21.269611  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:29:21.295967  656123 cri.go:89] found id: ""
	I1006 14:29:21.295989  656123 logs.go:282] 0 containers: []
	W1006 14:29:21.296000  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:29:21.296007  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:29:21.296062  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:29:21.323201  656123 cri.go:89] found id: ""
	I1006 14:29:21.323232  656123 logs.go:282] 0 containers: []
	W1006 14:29:21.323240  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:29:21.323246  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:29:21.323297  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:29:21.352254  656123 cri.go:89] found id: ""
	I1006 14:29:21.352271  656123 logs.go:282] 0 containers: []
	W1006 14:29:21.352277  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:29:21.352282  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:29:21.352343  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:29:21.380457  656123 cri.go:89] found id: ""
	I1006 14:29:21.380477  656123 logs.go:282] 0 containers: []
	W1006 14:29:21.380486  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:29:21.380493  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:29:21.380559  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:29:21.408352  656123 cri.go:89] found id: ""
	I1006 14:29:21.408368  656123 logs.go:282] 0 containers: []
	W1006 14:29:21.408375  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:29:21.408379  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:29:21.408435  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:29:21.434925  656123 cri.go:89] found id: ""
	I1006 14:29:21.434941  656123 logs.go:282] 0 containers: []
	W1006 14:29:21.434948  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:29:21.434953  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:29:21.435001  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:29:21.462533  656123 cri.go:89] found id: ""
	I1006 14:29:21.462551  656123 logs.go:282] 0 containers: []
	W1006 14:29:21.462560  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:29:21.462570  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:29:21.462587  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:29:21.532658  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:29:21.532682  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:29:21.547259  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:29:21.547286  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:29:21.605779  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:29:21.598199    7083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:21.598802    7083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:21.600396    7083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:21.600847    7083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:21.602071    7083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1006 14:29:21.605799  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:29:21.605816  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:29:21.670469  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:29:21.670493  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:29:24.203350  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:29:24.214528  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:29:24.214576  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:29:24.241149  656123 cri.go:89] found id: ""
	I1006 14:29:24.241173  656123 logs.go:282] 0 containers: []
	W1006 14:29:24.241182  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:29:24.241187  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:29:24.241259  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:29:24.267072  656123 cri.go:89] found id: ""
	I1006 14:29:24.267089  656123 logs.go:282] 0 containers: []
	W1006 14:29:24.267099  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:29:24.267104  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:29:24.267157  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:29:24.292610  656123 cri.go:89] found id: ""
	I1006 14:29:24.292629  656123 logs.go:282] 0 containers: []
	W1006 14:29:24.292639  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:29:24.292645  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:29:24.292694  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:29:24.318386  656123 cri.go:89] found id: ""
	I1006 14:29:24.318403  656123 logs.go:282] 0 containers: []
	W1006 14:29:24.318409  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:29:24.318414  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:29:24.318471  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:29:24.344804  656123 cri.go:89] found id: ""
	I1006 14:29:24.344827  656123 logs.go:282] 0 containers: []
	W1006 14:29:24.344837  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:29:24.344843  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:29:24.344893  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:29:24.372496  656123 cri.go:89] found id: ""
	I1006 14:29:24.372512  656123 logs.go:282] 0 containers: []
	W1006 14:29:24.372518  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:29:24.372523  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:29:24.372569  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:29:24.397473  656123 cri.go:89] found id: ""
	I1006 14:29:24.397489  656123 logs.go:282] 0 containers: []
	W1006 14:29:24.397495  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:29:24.397503  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:29:24.397514  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:29:24.460002  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:29:24.460024  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:29:24.492377  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:29:24.492394  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:29:24.558943  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:29:24.558960  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:29:24.572667  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:29:24.572685  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:29:24.631693  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:29:24.623841    7216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:24.624453    7216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:24.626057    7216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:24.626493    7216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:24.628013    7216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
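Every describe-nodes attempt fails identically: kubectl dials the apiserver endpoint from the node's kubeconfig, https://localhost:8441, and gets connection refused on [::1]:8441, which is consistent with the empty crictl listings above — no apiserver container exists, so nothing is listening on that port. A quick way to confirm that reading, assuming the same in-node shell (the port comes straight from the kubeconfig the loop passes to kubectl):

    # no listener is expected on the apiserver port if the container never started
    sudo ss -tlnp | grep -w 8441 || echo 'port 8441: no listener'
    # kubelet should be active; its recent log explains why the static pod is missing
    systemctl is-active kubelet
    sudo journalctl -u kubelet -n 50 --no-pager | grep -iE 'apiserver|fail|error' | tail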
	I1006 14:29:27.132387  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:29:27.143350  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:29:27.143429  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:29:27.169854  656123 cri.go:89] found id: ""
	I1006 14:29:27.169869  656123 logs.go:282] 0 containers: []
	W1006 14:29:27.169877  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:29:27.169882  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:29:27.169930  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:29:27.196448  656123 cri.go:89] found id: ""
	I1006 14:29:27.196464  656123 logs.go:282] 0 containers: []
	W1006 14:29:27.196471  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:29:27.196476  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:29:27.196522  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:29:27.223046  656123 cri.go:89] found id: ""
	I1006 14:29:27.223066  656123 logs.go:282] 0 containers: []
	W1006 14:29:27.223075  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:29:27.223081  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:29:27.223147  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:29:27.249726  656123 cri.go:89] found id: ""
	I1006 14:29:27.249744  656123 logs.go:282] 0 containers: []
	W1006 14:29:27.249751  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:29:27.249756  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:29:27.249810  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:29:27.277358  656123 cri.go:89] found id: ""
	I1006 14:29:27.277376  656123 logs.go:282] 0 containers: []
	W1006 14:29:27.277391  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:29:27.277398  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:29:27.277468  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:29:27.303432  656123 cri.go:89] found id: ""
	I1006 14:29:27.303452  656123 logs.go:282] 0 containers: []
	W1006 14:29:27.303461  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:29:27.303467  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:29:27.303524  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:29:27.330642  656123 cri.go:89] found id: ""
	I1006 14:29:27.330660  656123 logs.go:282] 0 containers: []
	W1006 14:29:27.330666  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:29:27.330677  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:29:27.330692  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:29:27.360553  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:29:27.360570  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:29:27.428526  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:29:27.428550  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:29:27.442696  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:29:27.442720  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:29:27.500958  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:29:27.493064    7333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:27.493671    7333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:27.495253    7333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:27.495769    7333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:27.497273    7333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1006 14:29:27.500983  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:29:27.500995  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:29:30.062974  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:29:30.074243  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:29:30.074297  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:29:30.101939  656123 cri.go:89] found id: ""
	I1006 14:29:30.101960  656123 logs.go:282] 0 containers: []
	W1006 14:29:30.101967  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:29:30.101973  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:29:30.102021  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:29:30.130122  656123 cri.go:89] found id: ""
	I1006 14:29:30.130139  656123 logs.go:282] 0 containers: []
	W1006 14:29:30.130145  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:29:30.130151  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:29:30.130229  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:29:30.157742  656123 cri.go:89] found id: ""
	I1006 14:29:30.157759  656123 logs.go:282] 0 containers: []
	W1006 14:29:30.157767  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:29:30.157773  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:29:30.157830  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:29:30.184613  656123 cri.go:89] found id: ""
	I1006 14:29:30.184634  656123 logs.go:282] 0 containers: []
	W1006 14:29:30.184641  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:29:30.184646  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:29:30.184696  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:29:30.212547  656123 cri.go:89] found id: ""
	I1006 14:29:30.212563  656123 logs.go:282] 0 containers: []
	W1006 14:29:30.212577  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:29:30.212582  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:29:30.212631  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:29:30.240288  656123 cri.go:89] found id: ""
	I1006 14:29:30.240303  656123 logs.go:282] 0 containers: []
	W1006 14:29:30.240310  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:29:30.240315  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:29:30.240365  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:29:30.267014  656123 cri.go:89] found id: ""
	I1006 14:29:30.267030  656123 logs.go:282] 0 containers: []
	W1006 14:29:30.267038  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:29:30.267047  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:29:30.267062  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:29:30.280742  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:29:30.280768  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:29:30.340211  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:29:30.332660    7440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:30.333170    7440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:30.334689    7440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:30.335152    7440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:30.336640    7440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1006 14:29:30.340244  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:29:30.340259  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:29:30.401294  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:29:30.401334  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:29:30.433250  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:29:30.433271  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:29:33.006726  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:29:33.018059  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:29:33.018122  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:29:33.045352  656123 cri.go:89] found id: ""
	I1006 14:29:33.045372  656123 logs.go:282] 0 containers: []
	W1006 14:29:33.045380  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:29:33.045386  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:29:33.045436  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:29:33.072234  656123 cri.go:89] found id: ""
	I1006 14:29:33.072252  656123 logs.go:282] 0 containers: []
	W1006 14:29:33.072260  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:29:33.072265  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:29:33.072315  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:29:33.100162  656123 cri.go:89] found id: ""
	I1006 14:29:33.100178  656123 logs.go:282] 0 containers: []
	W1006 14:29:33.100185  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:29:33.100190  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:29:33.100258  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:29:33.128258  656123 cri.go:89] found id: ""
	I1006 14:29:33.128278  656123 logs.go:282] 0 containers: []
	W1006 14:29:33.128288  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:29:33.128293  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:29:33.128342  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:29:33.155116  656123 cri.go:89] found id: ""
	I1006 14:29:33.155146  656123 logs.go:282] 0 containers: []
	W1006 14:29:33.155153  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:29:33.155158  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:29:33.155226  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:29:33.183135  656123 cri.go:89] found id: ""
	I1006 14:29:33.183150  656123 logs.go:282] 0 containers: []
	W1006 14:29:33.183156  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:29:33.183161  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:29:33.183243  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:29:33.209826  656123 cri.go:89] found id: ""
	I1006 14:29:33.209844  656123 logs.go:282] 0 containers: []
	W1006 14:29:33.209851  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:29:33.209859  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:29:33.209870  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:29:33.276119  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:29:33.276145  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:29:33.289780  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:29:33.289805  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:29:33.346572  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:29:33.338882    7581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:33.339397    7581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:33.341034    7581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:33.341541    7581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:33.343088    7581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
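The timestamps show the probe repeating on a fixed interval of about three seconds (14:29:27, :30, :33, ...); the wait loop only gives up when an outer deadline expires, at which point the test is recorded as failed. A sketch of the same wait pattern in shell, assuming a six-minute budget — the actual timeout used by the wait loop is not shown in this excerpt:

    # retry the apiserver probe every 3s until a deadline, as the loop above does
    deadline=$((SECONDS + 360))   # assumed 6-minute budget; the real value is not in this log
    until sudo crictl ps --quiet --name=kube-apiserver | grep -q .; do
        if [ "$SECONDS" -ge "$deadline" ]; then echo 'apiserver never came up' >&2; exit 1; fi
        sleep 3
    done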
	I1006 14:29:33.346592  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:29:33.346605  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:29:33.413643  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:29:33.413673  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:29:35.944641  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:29:35.955753  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:29:35.955806  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:29:35.981909  656123 cri.go:89] found id: ""
	I1006 14:29:35.981923  656123 logs.go:282] 0 containers: []
	W1006 14:29:35.981930  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:29:35.981935  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:29:35.981981  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:29:36.006585  656123 cri.go:89] found id: ""
	I1006 14:29:36.006605  656123 logs.go:282] 0 containers: []
	W1006 14:29:36.006615  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:29:36.006621  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:29:36.006687  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:29:36.034185  656123 cri.go:89] found id: ""
	I1006 14:29:36.034211  656123 logs.go:282] 0 containers: []
	W1006 14:29:36.034221  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:29:36.034228  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:29:36.034279  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:29:36.060600  656123 cri.go:89] found id: ""
	I1006 14:29:36.060618  656123 logs.go:282] 0 containers: []
	W1006 14:29:36.060625  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:29:36.060630  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:29:36.060676  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:29:36.086928  656123 cri.go:89] found id: ""
	I1006 14:29:36.086945  656123 logs.go:282] 0 containers: []
	W1006 14:29:36.086953  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:29:36.086957  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:29:36.087073  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:29:36.112833  656123 cri.go:89] found id: ""
	I1006 14:29:36.112851  656123 logs.go:282] 0 containers: []
	W1006 14:29:36.112875  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:29:36.112882  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:29:36.112944  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:29:36.139970  656123 cri.go:89] found id: ""
	I1006 14:29:36.139991  656123 logs.go:282] 0 containers: []
	W1006 14:29:36.140002  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:29:36.140014  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:29:36.140030  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:29:36.153360  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:29:36.153383  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:29:36.209902  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:29:36.202455    7695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:36.202929    7695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:36.204558    7695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:36.205025    7695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:36.206599    7695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1006 14:29:36.209916  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:29:36.209929  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:29:36.276242  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:29:36.276264  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:29:36.305135  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:29:36.305152  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:29:38.872573  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:29:38.884454  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:29:38.884512  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:29:38.911055  656123 cri.go:89] found id: ""
	I1006 14:29:38.911071  656123 logs.go:282] 0 containers: []
	W1006 14:29:38.911076  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:29:38.911081  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:29:38.911142  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:29:38.937413  656123 cri.go:89] found id: ""
	I1006 14:29:38.937433  656123 logs.go:282] 0 containers: []
	W1006 14:29:38.937441  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:29:38.937450  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:29:38.937529  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:29:38.963534  656123 cri.go:89] found id: ""
	I1006 14:29:38.963557  656123 logs.go:282] 0 containers: []
	W1006 14:29:38.963564  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:29:38.963569  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:29:38.963619  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:29:38.989811  656123 cri.go:89] found id: ""
	I1006 14:29:38.989825  656123 logs.go:282] 0 containers: []
	W1006 14:29:38.989831  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:29:38.989836  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:29:38.989882  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:29:39.016789  656123 cri.go:89] found id: ""
	I1006 14:29:39.016809  656123 logs.go:282] 0 containers: []
	W1006 14:29:39.016818  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:29:39.016824  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:29:39.016876  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:29:39.042392  656123 cri.go:89] found id: ""
	I1006 14:29:39.042407  656123 logs.go:282] 0 containers: []
	W1006 14:29:39.042413  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:29:39.042426  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:29:39.042473  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:29:39.068836  656123 cri.go:89] found id: ""
	I1006 14:29:39.068852  656123 logs.go:282] 0 containers: []
	W1006 14:29:39.068859  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:29:39.068867  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:29:39.068877  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:29:39.137663  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:29:39.137689  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:29:39.151471  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:29:39.151495  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:29:39.209176  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:29:39.201542    7818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:39.202107    7818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:39.203710    7818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:39.204183    7818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:39.205768    7818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1006 14:29:39.209192  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:29:39.209218  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:29:39.274008  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:29:39.274031  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:29:41.804322  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:29:41.815323  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:29:41.815387  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:29:41.842055  656123 cri.go:89] found id: ""
	I1006 14:29:41.842070  656123 logs.go:282] 0 containers: []
	W1006 14:29:41.842077  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:29:41.842082  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:29:41.842129  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:29:41.868733  656123 cri.go:89] found id: ""
	I1006 14:29:41.868750  656123 logs.go:282] 0 containers: []
	W1006 14:29:41.868756  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:29:41.868762  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:29:41.868809  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:29:41.896710  656123 cri.go:89] found id: ""
	I1006 14:29:41.896732  656123 logs.go:282] 0 containers: []
	W1006 14:29:41.896742  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:29:41.896750  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:29:41.896807  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:29:41.924854  656123 cri.go:89] found id: ""
	I1006 14:29:41.924875  656123 logs.go:282] 0 containers: []
	W1006 14:29:41.924884  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:29:41.924891  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:29:41.924950  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:29:41.952359  656123 cri.go:89] found id: ""
	I1006 14:29:41.952376  656123 logs.go:282] 0 containers: []
	W1006 14:29:41.952382  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:29:41.952387  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:29:41.952453  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:29:41.979613  656123 cri.go:89] found id: ""
	I1006 14:29:41.979629  656123 logs.go:282] 0 containers: []
	W1006 14:29:41.979636  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:29:41.979640  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:29:41.979690  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:29:42.006904  656123 cri.go:89] found id: ""
	I1006 14:29:42.006923  656123 logs.go:282] 0 containers: []
	W1006 14:29:42.006931  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:29:42.006941  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:29:42.006953  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:29:42.020495  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:29:42.020518  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:29:42.078512  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:29:42.070746    7942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:42.071276    7942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:42.072881    7942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:42.073322    7942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:42.074846    7942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1006 14:29:42.078528  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:29:42.078543  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:29:42.143410  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:29:42.143435  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:29:42.173024  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:29:42.173042  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:29:44.740873  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:29:44.751791  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:29:44.751852  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:29:44.777079  656123 cri.go:89] found id: ""
	I1006 14:29:44.777096  656123 logs.go:282] 0 containers: []
	W1006 14:29:44.777103  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:29:44.777108  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:29:44.777158  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:29:44.802137  656123 cri.go:89] found id: ""
	I1006 14:29:44.802151  656123 logs.go:282] 0 containers: []
	W1006 14:29:44.802158  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:29:44.802163  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:29:44.802227  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:29:44.827942  656123 cri.go:89] found id: ""
	I1006 14:29:44.827957  656123 logs.go:282] 0 containers: []
	W1006 14:29:44.827964  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:29:44.827970  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:29:44.828014  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:29:44.853867  656123 cri.go:89] found id: ""
	I1006 14:29:44.853886  656123 logs.go:282] 0 containers: []
	W1006 14:29:44.853894  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:29:44.853901  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:29:44.853956  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:29:44.879907  656123 cri.go:89] found id: ""
	I1006 14:29:44.879923  656123 logs.go:282] 0 containers: []
	W1006 14:29:44.879931  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:29:44.879937  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:29:44.879994  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:29:44.905634  656123 cri.go:89] found id: ""
	I1006 14:29:44.905654  656123 logs.go:282] 0 containers: []
	W1006 14:29:44.905663  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:29:44.905673  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:29:44.905731  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:29:44.932500  656123 cri.go:89] found id: ""
	I1006 14:29:44.932515  656123 logs.go:282] 0 containers: []
	W1006 14:29:44.932524  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:29:44.932532  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:29:44.932543  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:29:44.960602  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:29:44.960619  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:29:45.030445  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:29:45.030474  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:29:45.043971  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:29:45.043991  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:29:45.101230  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:29:45.093566    8088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:45.094142    8088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:45.095685    8088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:45.096125    8088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:45.097721    8088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
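	[editor's note] The block above is one pass of the diagnostic loop minikube repeats for the rest of this log: check for a kube-apiserver process, ask CRI-O for each control-plane container, then collect kubelet, dmesg, describe-nodes, CRI-O, and container-status output. A minimal shell sketch of one such pass, assuming it runs inside the minikube node (e.g. via `minikube ssh`); the commands are taken from the log itself, with only the loop over component names added and the container-status fallback slightly simplified:

	#!/usr/bin/env bash
	# One diagnostic pass, as repeated in the log above.

	# Is a kube-apiserver process running at all?
	sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo 'no kube-apiserver process'

	# Ask CRI-O for each control-plane container, running or exited.
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	            kube-controller-manager kindnet; do
	  ids=$(sudo crictl ps -a --quiet --name="$name")
	  [ -z "$ids" ] && echo "No container was found matching \"$name\""
	done

	# Collect the same logs minikube gathers on every retry.
	sudo journalctl -u kubelet -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
	sudo journalctl -u crio -n 400
	sudo crictl ps -a || sudo docker ps -a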
	I1006 14:29:45.101246  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:29:45.101259  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:29:47.666091  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:29:47.677001  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:29:47.677061  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:29:47.703386  656123 cri.go:89] found id: ""
	I1006 14:29:47.703404  656123 logs.go:282] 0 containers: []
	W1006 14:29:47.703412  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:29:47.703423  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:29:47.703482  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:29:47.729961  656123 cri.go:89] found id: ""
	I1006 14:29:47.729978  656123 logs.go:282] 0 containers: []
	W1006 14:29:47.729985  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:29:47.729998  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:29:47.730046  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:29:47.757114  656123 cri.go:89] found id: ""
	I1006 14:29:47.757148  656123 logs.go:282] 0 containers: []
	W1006 14:29:47.757155  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:29:47.757160  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:29:47.757220  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:29:47.783979  656123 cri.go:89] found id: ""
	I1006 14:29:47.783997  656123 logs.go:282] 0 containers: []
	W1006 14:29:47.784004  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:29:47.784008  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:29:47.784054  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:29:47.809265  656123 cri.go:89] found id: ""
	I1006 14:29:47.809280  656123 logs.go:282] 0 containers: []
	W1006 14:29:47.809287  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:29:47.809292  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:29:47.809337  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:29:47.834447  656123 cri.go:89] found id: ""
	I1006 14:29:47.834463  656123 logs.go:282] 0 containers: []
	W1006 14:29:47.834470  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:29:47.834474  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:29:47.834518  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:29:47.860785  656123 cri.go:89] found id: ""
	I1006 14:29:47.860802  656123 logs.go:282] 0 containers: []
	W1006 14:29:47.860808  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:29:47.860817  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:29:47.860827  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:29:47.928576  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:29:47.928600  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:29:47.942643  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:29:47.942669  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:29:48.000352  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:29:47.992403    8197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:47.992971    8197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:47.994566    8197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:47.995054    8197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:47.996597    8197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1006 14:29:48.000373  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:29:48.000391  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:29:48.065612  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:29:48.065640  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:29:50.596504  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:29:50.607654  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:29:50.607709  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:29:50.634723  656123 cri.go:89] found id: ""
	I1006 14:29:50.634742  656123 logs.go:282] 0 containers: []
	W1006 14:29:50.634751  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:29:50.634758  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:29:50.634821  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:29:50.662103  656123 cri.go:89] found id: ""
	I1006 14:29:50.662122  656123 logs.go:282] 0 containers: []
	W1006 14:29:50.662152  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:29:50.662160  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:29:50.662232  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:29:50.688627  656123 cri.go:89] found id: ""
	I1006 14:29:50.688646  656123 logs.go:282] 0 containers: []
	W1006 14:29:50.688653  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:29:50.688658  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:29:50.688719  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:29:50.715511  656123 cri.go:89] found id: ""
	I1006 14:29:50.715530  656123 logs.go:282] 0 containers: []
	W1006 14:29:50.715540  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:29:50.715544  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:29:50.715608  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:29:50.742597  656123 cri.go:89] found id: ""
	I1006 14:29:50.742612  656123 logs.go:282] 0 containers: []
	W1006 14:29:50.742619  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:29:50.742624  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:29:50.742671  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:29:50.769656  656123 cri.go:89] found id: ""
	I1006 14:29:50.769672  656123 logs.go:282] 0 containers: []
	W1006 14:29:50.769679  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:29:50.769684  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:29:50.769740  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:29:50.797585  656123 cri.go:89] found id: ""
	I1006 14:29:50.797603  656123 logs.go:282] 0 containers: []
	W1006 14:29:50.797611  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:29:50.797620  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:29:50.797631  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:29:50.811635  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:29:50.811664  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:29:50.870641  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:29:50.863296    8314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:50.863835    8314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:50.865405    8314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:50.865832    8314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:50.866946    8314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1006 14:29:50.870652  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:29:50.870665  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:29:50.933617  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:29:50.933644  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:29:50.964985  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:29:50.965003  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:29:53.535109  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:29:53.545986  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:29:53.546039  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:29:53.571300  656123 cri.go:89] found id: ""
	I1006 14:29:53.571315  656123 logs.go:282] 0 containers: []
	W1006 14:29:53.571322  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:29:53.571328  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:29:53.571373  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:29:53.597111  656123 cri.go:89] found id: ""
	I1006 14:29:53.597126  656123 logs.go:282] 0 containers: []
	W1006 14:29:53.597132  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:29:53.597137  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:29:53.597188  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:29:53.621477  656123 cri.go:89] found id: ""
	I1006 14:29:53.621493  656123 logs.go:282] 0 containers: []
	W1006 14:29:53.621500  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:29:53.621504  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:29:53.621550  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:29:53.647877  656123 cri.go:89] found id: ""
	I1006 14:29:53.647891  656123 logs.go:282] 0 containers: []
	W1006 14:29:53.647898  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:29:53.647902  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:29:53.647947  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:29:53.673269  656123 cri.go:89] found id: ""
	I1006 14:29:53.673284  656123 logs.go:282] 0 containers: []
	W1006 14:29:53.673291  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:29:53.673296  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:29:53.673356  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:29:53.698368  656123 cri.go:89] found id: ""
	I1006 14:29:53.698384  656123 logs.go:282] 0 containers: []
	W1006 14:29:53.698390  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:29:53.698395  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:29:53.698446  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:29:53.724452  656123 cri.go:89] found id: ""
	I1006 14:29:53.724471  656123 logs.go:282] 0 containers: []
	W1006 14:29:53.724481  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:29:53.724491  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:29:53.724507  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:29:53.790937  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:29:53.790959  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:29:53.804913  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:29:53.804929  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:29:53.862094  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:29:53.854344    8433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:53.854872    8433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:53.856476    8433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:53.856953    8433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:53.858577    8433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1006 14:29:53.862111  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:29:53.862124  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:29:53.921847  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:29:53.921867  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:29:56.452775  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:29:56.464702  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:29:56.464760  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:29:56.491587  656123 cri.go:89] found id: ""
	I1006 14:29:56.491603  656123 logs.go:282] 0 containers: []
	W1006 14:29:56.491609  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:29:56.491614  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:29:56.491662  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:29:56.517138  656123 cri.go:89] found id: ""
	I1006 14:29:56.517157  656123 logs.go:282] 0 containers: []
	W1006 14:29:56.517166  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:29:56.517170  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:29:56.517243  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:29:56.542713  656123 cri.go:89] found id: ""
	I1006 14:29:56.542728  656123 logs.go:282] 0 containers: []
	W1006 14:29:56.542735  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:29:56.542740  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:29:56.542787  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:29:56.568528  656123 cri.go:89] found id: ""
	I1006 14:29:56.568545  656123 logs.go:282] 0 containers: []
	W1006 14:29:56.568554  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:29:56.568561  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:29:56.568616  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:29:56.593881  656123 cri.go:89] found id: ""
	I1006 14:29:56.593897  656123 logs.go:282] 0 containers: []
	W1006 14:29:56.593904  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:29:56.593909  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:29:56.593957  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:29:56.618843  656123 cri.go:89] found id: ""
	I1006 14:29:56.618862  656123 logs.go:282] 0 containers: []
	W1006 14:29:56.618869  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:29:56.618874  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:29:56.618931  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:29:56.644219  656123 cri.go:89] found id: ""
	I1006 14:29:56.644239  656123 logs.go:282] 0 containers: []
	W1006 14:29:56.644249  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:29:56.644258  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:29:56.644270  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:29:56.701345  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:29:56.693737    8555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:56.694299    8555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:56.695864    8555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:56.696432    8555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:56.697961    8555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1006 14:29:56.701372  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:29:56.701384  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:29:56.762071  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:29:56.762096  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:29:56.791634  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:29:56.791656  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:29:56.857469  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:29:56.857492  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:29:59.371748  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:29:59.383943  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:29:59.384004  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:29:59.411674  656123 cri.go:89] found id: ""
	I1006 14:29:59.411695  656123 logs.go:282] 0 containers: []
	W1006 14:29:59.411703  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:29:59.411712  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:29:59.411829  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:29:59.438177  656123 cri.go:89] found id: ""
	I1006 14:29:59.438193  656123 logs.go:282] 0 containers: []
	W1006 14:29:59.438200  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:29:59.438217  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:29:59.438276  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:29:59.467581  656123 cri.go:89] found id: ""
	I1006 14:29:59.467601  656123 logs.go:282] 0 containers: []
	W1006 14:29:59.467611  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:29:59.467619  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:29:59.467682  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:29:59.496610  656123 cri.go:89] found id: ""
	I1006 14:29:59.496626  656123 logs.go:282] 0 containers: []
	W1006 14:29:59.496633  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:29:59.496638  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:29:59.496684  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:29:59.523799  656123 cri.go:89] found id: ""
	I1006 14:29:59.523815  656123 logs.go:282] 0 containers: []
	W1006 14:29:59.523822  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:29:59.523827  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:29:59.523889  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:29:59.550529  656123 cri.go:89] found id: ""
	I1006 14:29:59.550546  656123 logs.go:282] 0 containers: []
	W1006 14:29:59.550553  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:29:59.550558  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:29:59.550606  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:29:59.577487  656123 cri.go:89] found id: ""
	I1006 14:29:59.577503  656123 logs.go:282] 0 containers: []
	W1006 14:29:59.577509  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:29:59.577518  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:29:59.577529  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:29:59.607238  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:29:59.607260  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:29:59.676960  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:29:59.676986  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:29:59.690846  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:29:59.690869  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:29:59.749311  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:29:59.741475    8704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:59.742053    8704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:59.743670    8704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:59.744122    8704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:59.745515    8704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1006 14:29:59.749329  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:29:59.749339  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:30:02.310264  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:30:02.321519  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:30:02.321570  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:30:02.347821  656123 cri.go:89] found id: ""
	I1006 14:30:02.347842  656123 logs.go:282] 0 containers: []
	W1006 14:30:02.347852  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:30:02.347860  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:30:02.347920  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:30:02.373381  656123 cri.go:89] found id: ""
	I1006 14:30:02.373404  656123 logs.go:282] 0 containers: []
	W1006 14:30:02.373412  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:30:02.373418  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:30:02.373462  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:30:02.401169  656123 cri.go:89] found id: ""
	I1006 14:30:02.401189  656123 logs.go:282] 0 containers: []
	W1006 14:30:02.401199  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:30:02.401215  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:30:02.401271  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:30:02.427774  656123 cri.go:89] found id: ""
	I1006 14:30:02.427790  656123 logs.go:282] 0 containers: []
	W1006 14:30:02.427799  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:30:02.427806  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:30:02.427858  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:30:02.453624  656123 cri.go:89] found id: ""
	I1006 14:30:02.453642  656123 logs.go:282] 0 containers: []
	W1006 14:30:02.453652  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:30:02.453659  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:30:02.453725  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:30:02.480503  656123 cri.go:89] found id: ""
	I1006 14:30:02.480520  656123 logs.go:282] 0 containers: []
	W1006 14:30:02.480526  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:30:02.480531  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:30:02.480581  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:30:02.506624  656123 cri.go:89] found id: ""
	I1006 14:30:02.506643  656123 logs.go:282] 0 containers: []
	W1006 14:30:02.506652  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:30:02.506662  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:30:02.506675  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:30:02.575030  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:30:02.575055  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:30:02.589240  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:30:02.589266  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:30:02.647840  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:30:02.640193    8804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:02.640759    8804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:02.642327    8804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:02.642757    8804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:02.644424    8804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1006 14:30:02.647855  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:30:02.647866  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:30:02.710907  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:30:02.710932  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:30:05.243556  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:30:05.254230  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:30:05.254287  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:30:05.279490  656123 cri.go:89] found id: ""
	I1006 14:30:05.279506  656123 logs.go:282] 0 containers: []
	W1006 14:30:05.279514  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:30:05.279520  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:30:05.279572  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:30:05.305513  656123 cri.go:89] found id: ""
	I1006 14:30:05.305533  656123 logs.go:282] 0 containers: []
	W1006 14:30:05.305539  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:30:05.305544  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:30:05.305591  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:30:05.331962  656123 cri.go:89] found id: ""
	I1006 14:30:05.331981  656123 logs.go:282] 0 containers: []
	W1006 14:30:05.331990  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:30:05.331996  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:30:05.332058  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:30:05.357789  656123 cri.go:89] found id: ""
	I1006 14:30:05.357807  656123 logs.go:282] 0 containers: []
	W1006 14:30:05.357815  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:30:05.357820  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:30:05.357866  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:30:05.383637  656123 cri.go:89] found id: ""
	I1006 14:30:05.383658  656123 logs.go:282] 0 containers: []
	W1006 14:30:05.383664  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:30:05.383669  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:30:05.383715  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:30:05.408314  656123 cri.go:89] found id: ""
	I1006 14:30:05.408332  656123 logs.go:282] 0 containers: []
	W1006 14:30:05.408341  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:30:05.408348  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:30:05.408418  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:30:05.433843  656123 cri.go:89] found id: ""
	I1006 14:30:05.433861  656123 logs.go:282] 0 containers: []
	W1006 14:30:05.433867  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:30:05.433876  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:30:05.433888  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:30:05.494147  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:30:05.494176  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:30:05.523997  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:30:05.524016  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:30:05.591019  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:30:05.591039  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:30:05.604531  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:30:05.604546  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:30:05.660873  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:30:05.653677    8938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:05.654169    8938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:05.655684    8938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:05.656053    8938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:05.657599    8938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1006 14:30:08.162635  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:30:08.173492  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:30:08.173538  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:30:08.199879  656123 cri.go:89] found id: ""
	I1006 14:30:08.199896  656123 logs.go:282] 0 containers: []
	W1006 14:30:08.199902  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:30:08.199907  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:30:08.199954  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:30:08.225501  656123 cri.go:89] found id: ""
	I1006 14:30:08.225520  656123 logs.go:282] 0 containers: []
	W1006 14:30:08.225531  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:30:08.225537  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:30:08.225598  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:30:08.251711  656123 cri.go:89] found id: ""
	I1006 14:30:08.251730  656123 logs.go:282] 0 containers: []
	W1006 14:30:08.251737  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:30:08.251742  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:30:08.251790  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:30:08.277559  656123 cri.go:89] found id: ""
	I1006 14:30:08.277575  656123 logs.go:282] 0 containers: []
	W1006 14:30:08.277584  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:30:08.277594  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:30:08.277656  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:30:08.303749  656123 cri.go:89] found id: ""
	I1006 14:30:08.303767  656123 logs.go:282] 0 containers: []
	W1006 14:30:08.303776  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:30:08.303781  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:30:08.303830  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:30:08.329034  656123 cri.go:89] found id: ""
	I1006 14:30:08.329053  656123 logs.go:282] 0 containers: []
	W1006 14:30:08.329059  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:30:08.329064  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:30:08.329111  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:30:08.354393  656123 cri.go:89] found id: ""
	I1006 14:30:08.354409  656123 logs.go:282] 0 containers: []
	W1006 14:30:08.354416  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:30:08.354423  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:30:08.354434  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:30:08.416780  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:30:08.416799  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:30:08.444904  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:30:08.444925  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:30:08.518089  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:30:08.518111  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:30:08.531108  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:30:08.531124  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:30:08.586529  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:30:08.578762    9065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:08.579607    9065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:08.581199    9065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:08.581663    9065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:08.583179    9065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:30:08.578762    9065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:08.579607    9065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:08.581199    9065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:08.581663    9065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:08.583179    9065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
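	The cycle above then repeats every few seconds with fresh timestamps while the harness waits for the apiserver. As a minimal sketch (not minikube's own code path), the same per-component container check can be reproduced by hand on the node, e.g. over minikube ssh; the component names and crictl flags mirror the ones queried in the log:

	# Sketch: re-run the per-component CRI queries from the log by hand.
	# Assumes shell access to the node; flags copied from the logged commands.
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	  ids=$(sudo crictl ps -a --quiet --name="$name")
	  if [ -z "$ids" ]; then
	    echo "no containers matching \"$name\""   # matches the W-level warnings above
	  else
	    echo "$name: $ids"
	  fi
	done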
	I1006 14:30:11.087318  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:30:11.098631  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:30:11.098701  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:30:11.125423  656123 cri.go:89] found id: ""
	I1006 14:30:11.125441  656123 logs.go:282] 0 containers: []
	W1006 14:30:11.125450  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:30:11.125456  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:30:11.125520  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:30:11.154785  656123 cri.go:89] found id: ""
	I1006 14:30:11.154803  656123 logs.go:282] 0 containers: []
	W1006 14:30:11.154810  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:30:11.154815  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:30:11.154868  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:30:11.180879  656123 cri.go:89] found id: ""
	I1006 14:30:11.180899  656123 logs.go:282] 0 containers: []
	W1006 14:30:11.180908  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:30:11.180915  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:30:11.180979  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:30:11.207281  656123 cri.go:89] found id: ""
	I1006 14:30:11.207308  656123 logs.go:282] 0 containers: []
	W1006 14:30:11.207318  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:30:11.207326  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:30:11.207391  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:30:11.234275  656123 cri.go:89] found id: ""
	I1006 14:30:11.234293  656123 logs.go:282] 0 containers: []
	W1006 14:30:11.234302  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:30:11.234308  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:30:11.234379  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:30:11.261486  656123 cri.go:89] found id: ""
	I1006 14:30:11.261502  656123 logs.go:282] 0 containers: []
	W1006 14:30:11.261508  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:30:11.261514  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:30:11.261561  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:30:11.287155  656123 cri.go:89] found id: ""
	I1006 14:30:11.287173  656123 logs.go:282] 0 containers: []
	W1006 14:30:11.287180  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:30:11.287189  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:30:11.287223  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:30:11.358359  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:30:11.358383  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:30:11.372359  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:30:11.372385  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:30:11.430998  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:30:11.423269    9166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:11.423805    9166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:11.425394    9166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:11.425911    9166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:11.427479    9166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:30:11.423269    9166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:11.423805    9166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:11.425394    9166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:11.425911    9166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:11.427479    9166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 14:30:11.431012  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:30:11.431023  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:30:11.498514  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:30:11.498538  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:30:14.030847  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:30:14.041715  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:30:14.041763  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:30:14.067907  656123 cri.go:89] found id: ""
	I1006 14:30:14.067927  656123 logs.go:282] 0 containers: []
	W1006 14:30:14.067938  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:30:14.067944  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:30:14.067992  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:30:14.093781  656123 cri.go:89] found id: ""
	I1006 14:30:14.093800  656123 logs.go:282] 0 containers: []
	W1006 14:30:14.093810  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:30:14.093817  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:30:14.093873  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:30:14.120737  656123 cri.go:89] found id: ""
	I1006 14:30:14.120752  656123 logs.go:282] 0 containers: []
	W1006 14:30:14.120759  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:30:14.120765  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:30:14.120825  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:30:14.148551  656123 cri.go:89] found id: ""
	I1006 14:30:14.148567  656123 logs.go:282] 0 containers: []
	W1006 14:30:14.148575  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:30:14.148580  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:30:14.148632  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:30:14.174943  656123 cri.go:89] found id: ""
	I1006 14:30:14.174960  656123 logs.go:282] 0 containers: []
	W1006 14:30:14.174965  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:30:14.174970  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:30:14.175032  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:30:14.201148  656123 cri.go:89] found id: ""
	I1006 14:30:14.201163  656123 logs.go:282] 0 containers: []
	W1006 14:30:14.201172  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:30:14.201178  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:30:14.201245  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:30:14.228046  656123 cri.go:89] found id: ""
	I1006 14:30:14.228062  656123 logs.go:282] 0 containers: []
	W1006 14:30:14.228068  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:30:14.228077  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:30:14.228087  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:30:14.300889  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:30:14.300914  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:30:14.314304  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:30:14.314326  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:30:14.370818  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:30:14.363282    9300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:14.363836    9300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:14.365383    9300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:14.365793    9300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:14.367329    9300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:30:14.363282    9300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:14.363836    9300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:14.365383    9300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:14.365793    9300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:14.367329    9300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 14:30:14.370827  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:30:14.370838  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:30:14.431681  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:30:14.431704  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
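	Each "Gathering logs for ..." step in the cycle maps to a plain shell command on the node. A sketch reproducing them manually, with the flags copied verbatim from the commands logged above (assumes shell access to the node):

	# Sketch: the same diagnostics the harness gathers each cycle.
	sudo journalctl -u kubelet -n 400                                         # kubelet
	sudo journalctl -u crio -n 400                                            # CRI-O
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400   # dmesg
	sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a            # container status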
	I1006 14:30:16.961397  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:30:16.973165  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:30:16.973247  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:30:17.001273  656123 cri.go:89] found id: ""
	I1006 14:30:17.001291  656123 logs.go:282] 0 containers: []
	W1006 14:30:17.001297  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:30:17.001302  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:30:17.001354  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:30:17.027536  656123 cri.go:89] found id: ""
	I1006 14:30:17.027557  656123 logs.go:282] 0 containers: []
	W1006 14:30:17.027565  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:30:17.027570  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:30:17.027622  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:30:17.054924  656123 cri.go:89] found id: ""
	I1006 14:30:17.054940  656123 logs.go:282] 0 containers: []
	W1006 14:30:17.054947  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:30:17.054953  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:30:17.055000  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:30:17.083443  656123 cri.go:89] found id: ""
	I1006 14:30:17.083460  656123 logs.go:282] 0 containers: []
	W1006 14:30:17.083467  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:30:17.083472  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:30:17.083522  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:30:17.111442  656123 cri.go:89] found id: ""
	I1006 14:30:17.111459  656123 logs.go:282] 0 containers: []
	W1006 14:30:17.111467  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:30:17.111474  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:30:17.111530  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:30:17.138310  656123 cri.go:89] found id: ""
	I1006 14:30:17.138329  656123 logs.go:282] 0 containers: []
	W1006 14:30:17.138338  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:30:17.138344  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:30:17.138393  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:30:17.166360  656123 cri.go:89] found id: ""
	I1006 14:30:17.166389  656123 logs.go:282] 0 containers: []
	W1006 14:30:17.166400  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:30:17.166411  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:30:17.166427  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:30:17.238488  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:30:17.238516  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:30:17.252654  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:30:17.252688  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:30:17.312602  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:30:17.304484    9418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:17.305059    9418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:17.306672    9418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:17.307166    9418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:17.308768    9418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:30:17.304484    9418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:17.305059    9418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:17.306672    9418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:17.307166    9418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:17.308768    9418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 14:30:17.312623  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:30:17.312634  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:30:17.375185  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:30:17.375222  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:30:19.907611  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:30:19.918724  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:30:19.918776  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:30:19.945244  656123 cri.go:89] found id: ""
	I1006 14:30:19.945264  656123 logs.go:282] 0 containers: []
	W1006 14:30:19.945277  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:30:19.945285  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:30:19.945343  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:30:19.972919  656123 cri.go:89] found id: ""
	I1006 14:30:19.972939  656123 logs.go:282] 0 containers: []
	W1006 14:30:19.972949  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:30:19.972955  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:30:19.973008  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:30:19.999841  656123 cri.go:89] found id: ""
	I1006 14:30:19.999858  656123 logs.go:282] 0 containers: []
	W1006 14:30:19.999864  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:30:19.999870  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:30:19.999926  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:30:20.027271  656123 cri.go:89] found id: ""
	I1006 14:30:20.027290  656123 logs.go:282] 0 containers: []
	W1006 14:30:20.027299  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:30:20.027306  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:30:20.027364  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:30:20.054297  656123 cri.go:89] found id: ""
	I1006 14:30:20.054313  656123 logs.go:282] 0 containers: []
	W1006 14:30:20.054320  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:30:20.054325  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:30:20.054380  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:30:20.081354  656123 cri.go:89] found id: ""
	I1006 14:30:20.081374  656123 logs.go:282] 0 containers: []
	W1006 14:30:20.081380  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:30:20.081386  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:30:20.081438  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:30:20.108256  656123 cri.go:89] found id: ""
	I1006 14:30:20.108273  656123 logs.go:282] 0 containers: []
	W1006 14:30:20.108280  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:30:20.108289  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:30:20.108303  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:30:20.177476  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:30:20.177501  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:30:20.191396  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:30:20.191419  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:30:20.250424  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:30:20.242535    9540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:20.243129    9540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:20.244697    9540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:20.245110    9540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:20.246705    9540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:30:20.242535    9540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:20.243129    9540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:20.244697    9540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:20.245110    9540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:20.246705    9540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 14:30:20.250437  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:30:20.250448  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:30:20.311404  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:30:20.311430  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:30:22.842482  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:30:22.854386  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:30:22.854451  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:30:22.882144  656123 cri.go:89] found id: ""
	I1006 14:30:22.882160  656123 logs.go:282] 0 containers: []
	W1006 14:30:22.882167  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:30:22.882176  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:30:22.882244  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:30:22.908078  656123 cri.go:89] found id: ""
	I1006 14:30:22.908097  656123 logs.go:282] 0 containers: []
	W1006 14:30:22.908106  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:30:22.908112  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:30:22.908163  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:30:22.934596  656123 cri.go:89] found id: ""
	I1006 14:30:22.934613  656123 logs.go:282] 0 containers: []
	W1006 14:30:22.934620  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:30:22.934624  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:30:22.934673  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:30:22.961803  656123 cri.go:89] found id: ""
	I1006 14:30:22.961821  656123 logs.go:282] 0 containers: []
	W1006 14:30:22.961830  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:30:22.961837  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:30:22.961889  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:30:22.988277  656123 cri.go:89] found id: ""
	I1006 14:30:22.988293  656123 logs.go:282] 0 containers: []
	W1006 14:30:22.988300  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:30:22.988305  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:30:22.988355  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:30:23.015411  656123 cri.go:89] found id: ""
	I1006 14:30:23.015428  656123 logs.go:282] 0 containers: []
	W1006 14:30:23.015436  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:30:23.015441  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:30:23.015494  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:30:23.042508  656123 cri.go:89] found id: ""
	I1006 14:30:23.042526  656123 logs.go:282] 0 containers: []
	W1006 14:30:23.042534  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:30:23.042545  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:30:23.042558  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:30:23.110932  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:30:23.110957  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:30:23.125294  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:30:23.125322  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:30:23.185388  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:30:23.177268    9660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:23.177825    9660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:23.179508    9660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:23.179961    9660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:23.181496    9660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:30:23.177268    9660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:23.177825    9660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:23.179508    9660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:23.179961    9660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:23.181496    9660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 14:30:23.185405  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:30:23.185418  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:30:23.246673  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:30:23.246696  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:30:25.778383  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:30:25.789490  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:30:25.789539  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:30:25.816713  656123 cri.go:89] found id: ""
	I1006 14:30:25.816731  656123 logs.go:282] 0 containers: []
	W1006 14:30:25.816737  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:30:25.816742  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:30:25.816792  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:30:25.844676  656123 cri.go:89] found id: ""
	I1006 14:30:25.844699  656123 logs.go:282] 0 containers: []
	W1006 14:30:25.844708  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:30:25.844716  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:30:25.844784  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:30:25.872027  656123 cri.go:89] found id: ""
	I1006 14:30:25.872046  656123 logs.go:282] 0 containers: []
	W1006 14:30:25.872054  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:30:25.872059  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:30:25.872115  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:30:25.898454  656123 cri.go:89] found id: ""
	I1006 14:30:25.898473  656123 logs.go:282] 0 containers: []
	W1006 14:30:25.898480  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:30:25.898486  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:30:25.898548  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:30:25.926559  656123 cri.go:89] found id: ""
	I1006 14:30:25.926576  656123 logs.go:282] 0 containers: []
	W1006 14:30:25.926583  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:30:25.926589  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:30:25.926638  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:30:25.953516  656123 cri.go:89] found id: ""
	I1006 14:30:25.953535  656123 logs.go:282] 0 containers: []
	W1006 14:30:25.953544  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:30:25.953562  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:30:25.953634  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:30:25.980962  656123 cri.go:89] found id: ""
	I1006 14:30:25.980978  656123 logs.go:282] 0 containers: []
	W1006 14:30:25.980986  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:30:25.980994  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:30:25.981012  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:30:26.052486  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:30:26.052510  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:30:26.066688  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:30:26.066710  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:30:26.126899  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:30:26.118941    9785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:26.119633    9785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:26.121265    9785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:26.121767    9785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:26.123331    9785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:30:26.118941    9785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:26.119633    9785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:26.121265    9785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:26.121767    9785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:26.123331    9785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 14:30:26.126912  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:30:26.126924  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:30:26.187018  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:30:26.187047  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:30:28.721028  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:30:28.732295  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:30:28.732361  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:30:28.759561  656123 cri.go:89] found id: ""
	I1006 14:30:28.759583  656123 logs.go:282] 0 containers: []
	W1006 14:30:28.759592  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:30:28.759598  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:30:28.759651  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:30:28.787553  656123 cri.go:89] found id: ""
	I1006 14:30:28.787573  656123 logs.go:282] 0 containers: []
	W1006 14:30:28.787584  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:30:28.787598  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:30:28.787653  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:30:28.813499  656123 cri.go:89] found id: ""
	I1006 14:30:28.813520  656123 logs.go:282] 0 containers: []
	W1006 14:30:28.813529  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:30:28.813535  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:30:28.813591  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:30:28.840441  656123 cri.go:89] found id: ""
	I1006 14:30:28.840462  656123 logs.go:282] 0 containers: []
	W1006 14:30:28.840468  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:30:28.840474  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:30:28.840523  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:30:28.867632  656123 cri.go:89] found id: ""
	I1006 14:30:28.867647  656123 logs.go:282] 0 containers: []
	W1006 14:30:28.867654  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:30:28.867659  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:30:28.867709  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:30:28.895005  656123 cri.go:89] found id: ""
	I1006 14:30:28.895023  656123 logs.go:282] 0 containers: []
	W1006 14:30:28.895029  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:30:28.895034  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:30:28.895082  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:30:28.920965  656123 cri.go:89] found id: ""
	I1006 14:30:28.920983  656123 logs.go:282] 0 containers: []
	W1006 14:30:28.920993  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:30:28.921003  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:30:28.921017  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:30:28.981278  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:30:28.981302  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:30:29.010983  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:30:29.011000  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:30:29.078541  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:30:29.078565  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:30:29.092586  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:30:29.092613  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:30:29.151129  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:30:29.143937    9927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:29.144542    9927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:29.146112    9927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:29.146650    9927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:29.147708    9927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:30:29.143937    9927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:29.144542    9927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:29.146112    9927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:29.146650    9927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:29.147708    9927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
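	Every describe-nodes attempt above fails the same way: kubectl cannot reach the apiserver on localhost:8441 because no kube-apiserver container ever came up. A quick manual probe of that endpoint isolates the symptom (a sketch only; the port is taken from the errors above, while the /healthz path and the -k TLS skip are illustrative assumptions):

	# Sketch: confirm nothing is answering on the apiserver port seen in the log.
	curl -k --max-time 5 https://localhost:8441/healthz \
	  || echo "connection refused: apiserver not listening on 8441"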
	I1006 14:30:31.652214  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:30:31.663823  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:30:31.663891  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:30:31.690576  656123 cri.go:89] found id: ""
	I1006 14:30:31.690596  656123 logs.go:282] 0 containers: []
	W1006 14:30:31.690606  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:30:31.690613  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:30:31.690666  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:30:31.716874  656123 cri.go:89] found id: ""
	I1006 14:30:31.716894  656123 logs.go:282] 0 containers: []
	W1006 14:30:31.716902  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:30:31.716907  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:30:31.716956  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:30:31.744572  656123 cri.go:89] found id: ""
	I1006 14:30:31.744594  656123 logs.go:282] 0 containers: []
	W1006 14:30:31.744603  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:30:31.744611  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:30:31.744681  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:30:31.771539  656123 cri.go:89] found id: ""
	I1006 14:30:31.771556  656123 logs.go:282] 0 containers: []
	W1006 14:30:31.771565  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:30:31.771575  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:30:31.771637  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:30:31.798102  656123 cri.go:89] found id: ""
	I1006 14:30:31.798118  656123 logs.go:282] 0 containers: []
	W1006 14:30:31.798125  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:30:31.798131  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:30:31.798175  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:30:31.825905  656123 cri.go:89] found id: ""
	I1006 14:30:31.825921  656123 logs.go:282] 0 containers: []
	W1006 14:30:31.825928  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:30:31.825933  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:30:31.825985  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:30:31.853474  656123 cri.go:89] found id: ""
	I1006 14:30:31.853489  656123 logs.go:282] 0 containers: []
	W1006 14:30:31.853496  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:30:31.853504  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:30:31.853515  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:30:31.925541  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:30:31.925566  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:30:31.939650  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:30:31.939676  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:30:31.998586  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:30:31.990853   10031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:31.991461   10031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:31.992961   10031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:31.993424   10031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:31.994933   10031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:30:31.990853   10031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:31.991461   10031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:31.992961   10031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:31.993424   10031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:31.994933   10031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 14:30:31.998595  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:30:31.998606  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:30:32.058322  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:30:32.058348  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:30:34.591129  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:30:34.602495  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:30:34.602545  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:30:34.628973  656123 cri.go:89] found id: ""
	I1006 14:30:34.628991  656123 logs.go:282] 0 containers: []
	W1006 14:30:34.628998  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:30:34.629003  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:30:34.629048  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:30:34.654917  656123 cri.go:89] found id: ""
	I1006 14:30:34.654934  656123 logs.go:282] 0 containers: []
	W1006 14:30:34.654941  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:30:34.654945  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:30:34.654997  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:30:34.680385  656123 cri.go:89] found id: ""
	I1006 14:30:34.680401  656123 logs.go:282] 0 containers: []
	W1006 14:30:34.680408  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:30:34.680413  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:30:34.680459  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:30:34.705914  656123 cri.go:89] found id: ""
	I1006 14:30:34.705929  656123 logs.go:282] 0 containers: []
	W1006 14:30:34.705935  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:30:34.705940  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:30:34.705989  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:30:34.731580  656123 cri.go:89] found id: ""
	I1006 14:30:34.731597  656123 logs.go:282] 0 containers: []
	W1006 14:30:34.731604  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:30:34.731609  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:30:34.731661  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:30:34.756200  656123 cri.go:89] found id: ""
	I1006 14:30:34.756232  656123 logs.go:282] 0 containers: []
	W1006 14:30:34.756239  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:30:34.756244  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:30:34.756293  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:30:34.781770  656123 cri.go:89] found id: ""
	I1006 14:30:34.781785  656123 logs.go:282] 0 containers: []
	W1006 14:30:34.781794  656123 logs.go:284] No container was found matching "kindnet"
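
The cri.go lines above sweep the expected control-plane containers by name and treat empty crictl output as "not found". A sketch of that check under the same assumption, shelling out to the crictl command shown in the log (the containerIDs helper is hypothetical):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// containerIDs runs the same command the log shows:
	// sudo crictl ps -a --quiet --name=<name>
	func containerIDs(name string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		for _, name := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
			ids, err := containerIDs(name)
			if err != nil {
				fmt.Printf("crictl failed for %q: %v\n", name, err)
				continue
			}
			if len(ids) == 0 {
				fmt.Printf("no container was found matching %q\n", name)
				continue
			}
			fmt.Println(name, "->", ids)
		}
	}
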
	I1006 14:30:34.781802  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:30:34.781813  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:30:34.850861  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:30:34.850884  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:30:34.864688  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:30:34.864706  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:30:34.921713  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:30:34.914358   10154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:34.914917   10154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:34.916495   10154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:34.916918   10154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:34.918459   10154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1006 14:30:34.921723  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:30:34.921733  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:30:34.985884  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:30:34.985906  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
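
The "Gathering logs for ..." steps are plain journalctl reads capped at 400 lines. A local equivalent of those invocations, assuming the same unit names and line cap as the log (the unitLogs wrapper is illustrative, not minikube code):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// unitLogs mirrors: sudo journalctl -u <unit> -n <lines>
	func unitLogs(unit string, lines int) (string, error) {
		out, err := exec.Command("sudo", "journalctl", "-u", unit, "-n", fmt.Sprint(lines)).CombinedOutput()
		return string(out), err
	}

	func main() {
		for _, u := range []string{"kubelet", "crio"} {
			logs, err := unitLogs(u, 400)
			if err != nil {
				fmt.Println("failed to read", u, "logs:", err)
				continue
			}
			fmt.Printf("--- %s: %d bytes of journal output ---\n", u, len(logs))
		}
	}
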
	I1006 14:30:37.516053  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:30:37.526705  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:30:37.526751  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:30:37.551472  656123 cri.go:89] found id: ""
	I1006 14:30:37.551490  656123 logs.go:282] 0 containers: []
	W1006 14:30:37.551500  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:30:37.551507  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:30:37.551561  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:30:37.576603  656123 cri.go:89] found id: ""
	I1006 14:30:37.576619  656123 logs.go:282] 0 containers: []
	W1006 14:30:37.576626  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:30:37.576630  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:30:37.576674  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:30:37.602217  656123 cri.go:89] found id: ""
	I1006 14:30:37.602241  656123 logs.go:282] 0 containers: []
	W1006 14:30:37.602250  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:30:37.602254  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:30:37.602300  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:30:37.627547  656123 cri.go:89] found id: ""
	I1006 14:30:37.627561  656123 logs.go:282] 0 containers: []
	W1006 14:30:37.627567  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:30:37.627572  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:30:37.627614  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:30:37.652434  656123 cri.go:89] found id: ""
	I1006 14:30:37.652451  656123 logs.go:282] 0 containers: []
	W1006 14:30:37.652460  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:30:37.652467  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:30:37.652519  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:30:37.677543  656123 cri.go:89] found id: ""
	I1006 14:30:37.677558  656123 logs.go:282] 0 containers: []
	W1006 14:30:37.677564  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:30:37.677569  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:30:37.677611  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:30:37.701695  656123 cri.go:89] found id: ""
	I1006 14:30:37.701711  656123 logs.go:282] 0 containers: []
	W1006 14:30:37.701718  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:30:37.701727  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:30:37.701737  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:30:37.730832  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:30:37.730852  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:30:37.799686  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:30:37.799708  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:30:37.813081  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:30:37.813106  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:30:37.869274  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:30:37.861812   10287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:37.862406   10287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:37.863958   10287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:37.864398   10287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:37.865877   10287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1006 14:30:37.869285  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:30:37.869297  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:30:40.432488  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:30:40.443779  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:30:40.443830  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:30:40.471502  656123 cri.go:89] found id: ""
	I1006 14:30:40.471520  656123 logs.go:282] 0 containers: []
	W1006 14:30:40.471528  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:30:40.471533  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:30:40.471591  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:30:40.498418  656123 cri.go:89] found id: ""
	I1006 14:30:40.498435  656123 logs.go:282] 0 containers: []
	W1006 14:30:40.498442  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:30:40.498447  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:30:40.498495  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:30:40.525987  656123 cri.go:89] found id: ""
	I1006 14:30:40.526003  656123 logs.go:282] 0 containers: []
	W1006 14:30:40.526009  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:30:40.526015  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:30:40.526073  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:30:40.554161  656123 cri.go:89] found id: ""
	I1006 14:30:40.554180  656123 logs.go:282] 0 containers: []
	W1006 14:30:40.554190  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:30:40.554197  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:30:40.554262  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:30:40.581168  656123 cri.go:89] found id: ""
	I1006 14:30:40.581186  656123 logs.go:282] 0 containers: []
	W1006 14:30:40.581193  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:30:40.581198  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:30:40.581272  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:30:40.608862  656123 cri.go:89] found id: ""
	I1006 14:30:40.608879  656123 logs.go:282] 0 containers: []
	W1006 14:30:40.608890  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:30:40.608899  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:30:40.608951  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:30:40.636053  656123 cri.go:89] found id: ""
	I1006 14:30:40.636069  656123 logs.go:282] 0 containers: []
	W1006 14:30:40.636076  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:30:40.636084  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:30:40.636096  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:30:40.649832  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:30:40.649854  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:30:40.708143  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:30:40.700302   10406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:40.700800   10406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:40.702328   10406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:40.702794   10406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:40.704437   10406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1006 14:30:40.708157  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:30:40.708173  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:30:40.767571  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:30:40.767598  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:30:40.798425  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:30:40.798447  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
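
The timestamps step forward roughly every three seconds (14:30:31, :34, :37, :40, ...), i.e. a fixed-interval poll for a healthy apiserver. A minimal sketch of such a wait loop; the probe, interval, and deadline here are read off the log or assumed, not taken from the minikube source:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// probe reports whether anything accepts TCP connections on the
	// apiserver port seen in the log.
	func probe() bool {
		conn, err := net.DialTimeout("tcp", "localhost:8441", time.Second)
		if err != nil {
			return false
		}
		conn.Close()
		return true
	}

	func main() {
		deadline := time.Now().Add(time.Minute) // illustrative timeout
		for time.Now().Before(deadline) {
			if probe() {
				fmt.Println("apiserver is up")
				return
			}
			time.Sleep(3 * time.Second) // matches the cadence of the log above
		}
		fmt.Println("timed out waiting for apiserver")
	}
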
	I1006 14:30:43.369172  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:30:43.380275  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:30:43.380336  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:30:43.407137  656123 cri.go:89] found id: ""
	I1006 14:30:43.407166  656123 logs.go:282] 0 containers: []
	W1006 14:30:43.407172  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:30:43.407178  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:30:43.407255  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:30:43.434264  656123 cri.go:89] found id: ""
	I1006 14:30:43.434280  656123 logs.go:282] 0 containers: []
	W1006 14:30:43.434286  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:30:43.434291  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:30:43.434344  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:30:43.460492  656123 cri.go:89] found id: ""
	I1006 14:30:43.460511  656123 logs.go:282] 0 containers: []
	W1006 14:30:43.460521  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:30:43.460527  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:30:43.460579  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:30:43.486096  656123 cri.go:89] found id: ""
	I1006 14:30:43.486112  656123 logs.go:282] 0 containers: []
	W1006 14:30:43.486118  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:30:43.486123  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:30:43.486180  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:30:43.512166  656123 cri.go:89] found id: ""
	I1006 14:30:43.512182  656123 logs.go:282] 0 containers: []
	W1006 14:30:43.512189  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:30:43.512200  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:30:43.512274  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:30:43.540182  656123 cri.go:89] found id: ""
	I1006 14:30:43.540198  656123 logs.go:282] 0 containers: []
	W1006 14:30:43.540225  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:30:43.540231  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:30:43.540281  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:30:43.566257  656123 cri.go:89] found id: ""
	I1006 14:30:43.566276  656123 logs.go:282] 0 containers: []
	W1006 14:30:43.566283  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:30:43.566291  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:30:43.566301  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:30:43.633282  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:30:43.633308  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:30:43.646525  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:30:43.646547  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:30:43.703245  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:30:43.695412   10527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:43.695958   10527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:43.697564   10527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:43.698089   10527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:43.699634   10527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1006 14:30:43.703258  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:30:43.703271  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:30:43.763009  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:30:43.763030  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:30:46.294610  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:30:46.306608  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:30:46.306657  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:30:46.333990  656123 cri.go:89] found id: ""
	I1006 14:30:46.334010  656123 logs.go:282] 0 containers: []
	W1006 14:30:46.334017  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:30:46.334023  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:30:46.334071  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:30:46.360169  656123 cri.go:89] found id: ""
	I1006 14:30:46.360186  656123 logs.go:282] 0 containers: []
	W1006 14:30:46.360193  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:30:46.360197  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:30:46.360274  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:30:46.386526  656123 cri.go:89] found id: ""
	I1006 14:30:46.386543  656123 logs.go:282] 0 containers: []
	W1006 14:30:46.386552  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:30:46.386559  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:30:46.386618  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:30:46.412732  656123 cri.go:89] found id: ""
	I1006 14:30:46.412755  656123 logs.go:282] 0 containers: []
	W1006 14:30:46.412761  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:30:46.412768  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:30:46.412819  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:30:46.437943  656123 cri.go:89] found id: ""
	I1006 14:30:46.437961  656123 logs.go:282] 0 containers: []
	W1006 14:30:46.437969  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:30:46.437975  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:30:46.438022  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:30:46.462227  656123 cri.go:89] found id: ""
	I1006 14:30:46.462245  656123 logs.go:282] 0 containers: []
	W1006 14:30:46.462254  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:30:46.462259  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:30:46.462308  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:30:46.486426  656123 cri.go:89] found id: ""
	I1006 14:30:46.486446  656123 logs.go:282] 0 containers: []
	W1006 14:30:46.486455  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:30:46.486465  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:30:46.486478  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:30:46.555804  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:30:46.555824  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:30:46.568953  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:30:46.568977  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:30:46.625518  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:30:46.616895   10651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:46.618433   10651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:46.618998   10651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:46.620647   10651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:46.621154   10651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1006 14:30:46.625532  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:30:46.625542  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:30:46.689026  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:30:46.689045  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:30:49.220452  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:30:49.231376  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:30:49.231437  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:30:49.257464  656123 cri.go:89] found id: ""
	I1006 14:30:49.257484  656123 logs.go:282] 0 containers: []
	W1006 14:30:49.257492  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:30:49.257499  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:30:49.257549  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:30:49.282291  656123 cri.go:89] found id: ""
	I1006 14:30:49.282305  656123 logs.go:282] 0 containers: []
	W1006 14:30:49.282315  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:30:49.282322  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:30:49.282374  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:30:49.307787  656123 cri.go:89] found id: ""
	I1006 14:30:49.307806  656123 logs.go:282] 0 containers: []
	W1006 14:30:49.307815  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:30:49.307821  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:30:49.307872  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:30:49.333154  656123 cri.go:89] found id: ""
	I1006 14:30:49.333172  656123 logs.go:282] 0 containers: []
	W1006 14:30:49.333179  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:30:49.333185  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:30:49.333252  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:30:49.359161  656123 cri.go:89] found id: ""
	I1006 14:30:49.359175  656123 logs.go:282] 0 containers: []
	W1006 14:30:49.359183  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:30:49.359188  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:30:49.359253  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:30:49.385380  656123 cri.go:89] found id: ""
	I1006 14:30:49.385398  656123 logs.go:282] 0 containers: []
	W1006 14:30:49.385405  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:30:49.385410  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:30:49.385461  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:30:49.409982  656123 cri.go:89] found id: ""
	I1006 14:30:49.410009  656123 logs.go:282] 0 containers: []
	W1006 14:30:49.410020  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:30:49.410030  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:30:49.410043  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:30:49.470637  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:30:49.470662  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:30:49.498568  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:30:49.498585  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:30:49.568338  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:30:49.568355  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:30:49.581842  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:30:49.581863  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:30:49.638518  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:30:49.631016   10785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:49.631575   10785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:49.633164   10785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:49.633595   10785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:49.635088   10785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
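
The "describe nodes" step invokes the version-pinned kubectl inside the node with an explicit kubeconfig, exactly the command string in the log. Run standalone, it fails the same way while the apiserver is down (paths copied from the log; the surrounding program is illustrative):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("sudo",
			"/var/lib/minikube/binaries/v1.34.1/kubectl",
			"describe", "nodes",
			"--kubeconfig=/var/lib/minikube/kubeconfig")
		out, err := cmd.CombinedOutput()
		if err != nil {
			// While nothing listens on localhost:8441 this exits with
			// status 1 and the "connection refused" stderr shown above.
			fmt.Printf("describe nodes failed: %v\n%s", err, out)
			return
		}
		fmt.Print(string(out))
	}
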
	I1006 14:30:52.139121  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:30:52.151341  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:30:52.151400  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:30:52.180909  656123 cri.go:89] found id: ""
	I1006 14:30:52.180929  656123 logs.go:282] 0 containers: []
	W1006 14:30:52.180937  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:30:52.180943  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:30:52.181004  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:30:52.212664  656123 cri.go:89] found id: ""
	I1006 14:30:52.212687  656123 logs.go:282] 0 containers: []
	W1006 14:30:52.212695  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:30:52.212700  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:30:52.212753  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:30:52.242804  656123 cri.go:89] found id: ""
	I1006 14:30:52.242824  656123 logs.go:282] 0 containers: []
	W1006 14:30:52.242833  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:30:52.242840  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:30:52.242906  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:30:52.275408  656123 cri.go:89] found id: ""
	I1006 14:30:52.275428  656123 logs.go:282] 0 containers: []
	W1006 14:30:52.275437  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:30:52.275443  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:30:52.275511  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:30:52.304772  656123 cri.go:89] found id: ""
	I1006 14:30:52.304791  656123 logs.go:282] 0 containers: []
	W1006 14:30:52.304797  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:30:52.304802  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:30:52.304855  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:30:52.334628  656123 cri.go:89] found id: ""
	I1006 14:30:52.334646  656123 logs.go:282] 0 containers: []
	W1006 14:30:52.334665  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:30:52.334672  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:30:52.334744  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:30:52.363535  656123 cri.go:89] found id: ""
	I1006 14:30:52.363551  656123 logs.go:282] 0 containers: []
	W1006 14:30:52.363558  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:30:52.363567  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:30:52.363578  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:30:52.395148  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:30:52.395172  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:30:52.467790  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:30:52.467818  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:30:52.483589  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:30:52.483613  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:30:52.547153  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:30:52.538900   10918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:52.539522   10918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:52.541194   10918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:52.541724   10918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:52.543496   10918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1006 14:30:52.547168  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:30:52.547191  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:30:55.111539  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:30:55.123376  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:30:55.123432  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:30:55.151263  656123 cri.go:89] found id: ""
	I1006 14:30:55.151278  656123 logs.go:282] 0 containers: []
	W1006 14:30:55.151285  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:30:55.151289  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:30:55.151354  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:30:55.179099  656123 cri.go:89] found id: ""
	I1006 14:30:55.179116  656123 logs.go:282] 0 containers: []
	W1006 14:30:55.179123  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:30:55.179127  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:30:55.179177  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:30:55.207568  656123 cri.go:89] found id: ""
	I1006 14:30:55.207586  656123 logs.go:282] 0 containers: []
	W1006 14:30:55.207594  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:30:55.207599  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:30:55.207653  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:30:55.236037  656123 cri.go:89] found id: ""
	I1006 14:30:55.236058  656123 logs.go:282] 0 containers: []
	W1006 14:30:55.236068  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:30:55.236075  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:30:55.236132  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:30:55.263286  656123 cri.go:89] found id: ""
	I1006 14:30:55.263304  656123 logs.go:282] 0 containers: []
	W1006 14:30:55.263311  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:30:55.263316  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:30:55.263416  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:30:55.291167  656123 cri.go:89] found id: ""
	I1006 14:30:55.291189  656123 logs.go:282] 0 containers: []
	W1006 14:30:55.291197  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:30:55.291217  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:30:55.291271  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:30:55.318410  656123 cri.go:89] found id: ""
	I1006 14:30:55.318430  656123 logs.go:282] 0 containers: []
	W1006 14:30:55.318440  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:30:55.318450  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:30:55.318461  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:30:55.385160  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:30:55.385187  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:30:55.399050  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:30:55.399076  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:30:55.458418  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:30:55.450518   11025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:55.451123   11025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:55.452726   11025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:55.453351   11025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:55.454908   11025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1006 14:30:55.458432  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:30:55.458448  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:30:55.524792  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:30:55.524816  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:30:58.057888  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:30:58.068966  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:30:58.069020  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:30:58.096398  656123 cri.go:89] found id: ""
	I1006 14:30:58.096415  656123 logs.go:282] 0 containers: []
	W1006 14:30:58.096423  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:30:58.096428  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:30:58.096477  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:30:58.123183  656123 cri.go:89] found id: ""
	I1006 14:30:58.123199  656123 logs.go:282] 0 containers: []
	W1006 14:30:58.123218  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:30:58.123225  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:30:58.123278  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:30:58.149129  656123 cri.go:89] found id: ""
	I1006 14:30:58.149145  656123 logs.go:282] 0 containers: []
	W1006 14:30:58.149152  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:30:58.149156  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:30:58.149231  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:30:58.176154  656123 cri.go:89] found id: ""
	I1006 14:30:58.176171  656123 logs.go:282] 0 containers: []
	W1006 14:30:58.176178  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:30:58.176183  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:30:58.176260  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:30:58.202224  656123 cri.go:89] found id: ""
	I1006 14:30:58.202244  656123 logs.go:282] 0 containers: []
	W1006 14:30:58.202252  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:30:58.202257  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:30:58.202308  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:30:58.228701  656123 cri.go:89] found id: ""
	I1006 14:30:58.228722  656123 logs.go:282] 0 containers: []
	W1006 14:30:58.228731  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:30:58.228738  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:30:58.228789  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:30:58.255405  656123 cri.go:89] found id: ""
	I1006 14:30:58.255424  656123 logs.go:282] 0 containers: []
	W1006 14:30:58.255434  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:30:58.255445  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:30:58.255463  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:30:58.326378  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:30:58.326403  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:30:58.340088  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:30:58.340113  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:30:58.398424  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:30:58.390470   11153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:58.391705   11153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:58.392182   11153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:58.393789   11153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:58.394272   11153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1006 14:30:58.398434  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:30:58.398444  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:30:58.458532  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:30:58.458557  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:31:00.988890  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:31:01.000117  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:31:01.000187  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:31:01.027975  656123 cri.go:89] found id: ""
	I1006 14:31:01.027994  656123 logs.go:282] 0 containers: []
	W1006 14:31:01.028005  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:31:01.028011  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:31:01.028073  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:31:01.057671  656123 cri.go:89] found id: ""
	I1006 14:31:01.057689  656123 logs.go:282] 0 containers: []
	W1006 14:31:01.057695  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:31:01.057703  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:31:01.057753  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:31:01.086296  656123 cri.go:89] found id: ""
	I1006 14:31:01.086312  656123 logs.go:282] 0 containers: []
	W1006 14:31:01.086319  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:31:01.086324  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:31:01.086380  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:31:01.115804  656123 cri.go:89] found id: ""
	I1006 14:31:01.115828  656123 logs.go:282] 0 containers: []
	W1006 14:31:01.115838  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:31:01.115846  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:31:01.115914  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:31:01.143626  656123 cri.go:89] found id: ""
	I1006 14:31:01.143652  656123 logs.go:282] 0 containers: []
	W1006 14:31:01.143662  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:31:01.143669  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:31:01.143730  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:31:01.173329  656123 cri.go:89] found id: ""
	I1006 14:31:01.173351  656123 logs.go:282] 0 containers: []
	W1006 14:31:01.173358  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:31:01.173363  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:31:01.173425  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:31:01.202447  656123 cri.go:89] found id: ""
	I1006 14:31:01.202464  656123 logs.go:282] 0 containers: []
	W1006 14:31:01.202472  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:31:01.202481  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:31:01.202493  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:31:01.264676  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:31:01.255680   11269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:01.256306   11269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:01.258878   11269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:01.259545   11269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:01.261098   11269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1006 14:31:01.264688  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:31:01.264701  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:31:01.325726  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:31:01.325755  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:31:01.357935  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:31:01.357956  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:31:01.426320  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:31:01.426346  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:31:03.942695  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:31:03.954165  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:31:03.954257  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:31:03.982933  656123 cri.go:89] found id: ""
	I1006 14:31:03.982952  656123 logs.go:282] 0 containers: []
	W1006 14:31:03.982960  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:31:03.982966  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:31:03.983023  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:31:04.010750  656123 cri.go:89] found id: ""
	I1006 14:31:04.010768  656123 logs.go:282] 0 containers: []
	W1006 14:31:04.010775  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:31:04.010780  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:31:04.010845  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:31:04.038408  656123 cri.go:89] found id: ""
	I1006 14:31:04.038430  656123 logs.go:282] 0 containers: []
	W1006 14:31:04.038440  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:31:04.038446  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:31:04.038506  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:31:04.065987  656123 cri.go:89] found id: ""
	I1006 14:31:04.066004  656123 logs.go:282] 0 containers: []
	W1006 14:31:04.066011  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:31:04.066017  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:31:04.066064  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:31:04.092615  656123 cri.go:89] found id: ""
	I1006 14:31:04.092635  656123 logs.go:282] 0 containers: []
	W1006 14:31:04.092645  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:31:04.092651  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:31:04.092715  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:31:04.120296  656123 cri.go:89] found id: ""
	I1006 14:31:04.120314  656123 logs.go:282] 0 containers: []
	W1006 14:31:04.120324  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:31:04.120331  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:31:04.120392  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:31:04.148258  656123 cri.go:89] found id: ""
	I1006 14:31:04.148275  656123 logs.go:282] 0 containers: []
	W1006 14:31:04.148282  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:31:04.148291  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:31:04.148303  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:31:04.162693  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:31:04.162716  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:31:04.222565  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:31:04.214872   11401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:04.215499   11401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:04.216999   11401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:04.217486   11401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:04.218767   11401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1006 14:31:04.222576  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:31:04.222588  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:31:04.284619  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:31:04.284645  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:31:04.315049  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:31:04.315067  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:31:06.880125  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:31:06.891035  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:31:06.891100  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:31:06.919022  656123 cri.go:89] found id: ""
	I1006 14:31:06.919039  656123 logs.go:282] 0 containers: []
	W1006 14:31:06.919054  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:31:06.919059  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:31:06.919109  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:31:06.945007  656123 cri.go:89] found id: ""
	I1006 14:31:06.945023  656123 logs.go:282] 0 containers: []
	W1006 14:31:06.945030  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:31:06.945035  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:31:06.945082  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:31:06.971114  656123 cri.go:89] found id: ""
	I1006 14:31:06.971140  656123 logs.go:282] 0 containers: []
	W1006 14:31:06.971150  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:31:06.971156  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:31:06.971219  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:31:06.997325  656123 cri.go:89] found id: ""
	I1006 14:31:06.997341  656123 logs.go:282] 0 containers: []
	W1006 14:31:06.997349  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:31:06.997354  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:31:06.997399  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:31:07.024483  656123 cri.go:89] found id: ""
	I1006 14:31:07.024503  656123 logs.go:282] 0 containers: []
	W1006 14:31:07.024510  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:31:07.024515  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:31:07.024563  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:31:07.050897  656123 cri.go:89] found id: ""
	I1006 14:31:07.050916  656123 logs.go:282] 0 containers: []
	W1006 14:31:07.050924  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:31:07.050929  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:31:07.050988  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:31:07.076681  656123 cri.go:89] found id: ""
	I1006 14:31:07.076698  656123 logs.go:282] 0 containers: []
	W1006 14:31:07.076706  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:31:07.076716  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:31:07.076730  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:31:07.137015  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:31:07.137039  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:31:07.167691  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:31:07.167711  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:31:07.236752  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:31:07.236774  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:31:07.250497  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:31:07.250519  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:31:07.307410  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:31:07.299651   11539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:07.300252   11539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:07.301817   11539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:07.302267   11539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:07.303782   11539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1006 14:31:09.809076  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:31:09.819941  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:31:09.819991  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:31:09.847047  656123 cri.go:89] found id: ""
	I1006 14:31:09.847066  656123 logs.go:282] 0 containers: []
	W1006 14:31:09.847075  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:31:09.847082  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:31:09.847151  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:31:09.873840  656123 cri.go:89] found id: ""
	I1006 14:31:09.873856  656123 logs.go:282] 0 containers: []
	W1006 14:31:09.873862  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:31:09.873867  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:31:09.873923  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:31:09.900892  656123 cri.go:89] found id: ""
	I1006 14:31:09.900908  656123 logs.go:282] 0 containers: []
	W1006 14:31:09.900914  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:31:09.900920  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:31:09.900967  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:31:09.927801  656123 cri.go:89] found id: ""
	I1006 14:31:09.927822  656123 logs.go:282] 0 containers: []
	W1006 14:31:09.927835  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:31:09.927842  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:31:09.927892  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:31:09.955400  656123 cri.go:89] found id: ""
	I1006 14:31:09.955420  656123 logs.go:282] 0 containers: []
	W1006 14:31:09.955428  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:31:09.955433  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:31:09.955484  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:31:09.981624  656123 cri.go:89] found id: ""
	I1006 14:31:09.981640  656123 logs.go:282] 0 containers: []
	W1006 14:31:09.981647  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:31:09.981653  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:31:09.981700  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:31:10.009693  656123 cri.go:89] found id: ""
	I1006 14:31:10.009710  656123 logs.go:282] 0 containers: []
	W1006 14:31:10.009716  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:31:10.009724  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:31:10.009735  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:31:10.075460  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:31:10.075492  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:31:10.089300  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:31:10.089327  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:31:10.148123  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:31:10.140282   11647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:10.140860   11647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:10.142433   11647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:10.142866   11647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:10.144460   11647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1006 14:31:10.148152  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:31:10.148165  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:31:10.210442  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:31:10.210473  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:31:12.742692  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:31:12.754226  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:31:12.754289  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:31:12.783228  656123 cri.go:89] found id: ""
	I1006 14:31:12.783249  656123 logs.go:282] 0 containers: []
	W1006 14:31:12.783256  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:31:12.783263  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:31:12.783324  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:31:12.811693  656123 cri.go:89] found id: ""
	I1006 14:31:12.811715  656123 logs.go:282] 0 containers: []
	W1006 14:31:12.811725  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:31:12.811732  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:31:12.811782  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:31:12.840310  656123 cri.go:89] found id: ""
	I1006 14:31:12.840332  656123 logs.go:282] 0 containers: []
	W1006 14:31:12.840342  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:31:12.840348  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:31:12.840402  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:31:12.869101  656123 cri.go:89] found id: ""
	I1006 14:31:12.869123  656123 logs.go:282] 0 containers: []
	W1006 14:31:12.869131  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:31:12.869137  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:31:12.869189  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:31:12.897605  656123 cri.go:89] found id: ""
	I1006 14:31:12.897623  656123 logs.go:282] 0 containers: []
	W1006 14:31:12.897630  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:31:12.897635  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:31:12.897693  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:31:12.926227  656123 cri.go:89] found id: ""
	I1006 14:31:12.926247  656123 logs.go:282] 0 containers: []
	W1006 14:31:12.926254  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:31:12.926260  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:31:12.926308  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:31:12.955298  656123 cri.go:89] found id: ""
	I1006 14:31:12.955315  656123 logs.go:282] 0 containers: []
	W1006 14:31:12.955324  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:31:12.955334  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:31:12.955348  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:31:13.021936  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:31:13.021962  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:31:13.036093  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:31:13.036115  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:31:13.096234  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:31:13.088298   11777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:13.088908   11777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:13.090517   11777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:13.090973   11777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:13.092543   11777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1006 14:31:13.096246  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:31:13.096258  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:31:13.156934  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:31:13.156960  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:31:15.689959  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:31:15.701228  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:31:15.701301  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:31:15.727030  656123 cri.go:89] found id: ""
	I1006 14:31:15.727050  656123 logs.go:282] 0 containers: []
	W1006 14:31:15.727059  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:31:15.727067  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:31:15.727119  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:31:15.753392  656123 cri.go:89] found id: ""
	I1006 14:31:15.753409  656123 logs.go:282] 0 containers: []
	W1006 14:31:15.753417  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:31:15.753421  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:31:15.753471  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:31:15.780750  656123 cri.go:89] found id: ""
	I1006 14:31:15.780775  656123 logs.go:282] 0 containers: []
	W1006 14:31:15.780783  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:31:15.780788  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:31:15.780842  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:31:15.807372  656123 cri.go:89] found id: ""
	I1006 14:31:15.807388  656123 logs.go:282] 0 containers: []
	W1006 14:31:15.807401  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:31:15.807406  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:31:15.807461  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:31:15.834188  656123 cri.go:89] found id: ""
	I1006 14:31:15.834222  656123 logs.go:282] 0 containers: []
	W1006 14:31:15.834233  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:31:15.834240  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:31:15.834293  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:31:15.861606  656123 cri.go:89] found id: ""
	I1006 14:31:15.861624  656123 logs.go:282] 0 containers: []
	W1006 14:31:15.861631  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:31:15.861636  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:31:15.861702  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:31:15.888991  656123 cri.go:89] found id: ""
	I1006 14:31:15.889007  656123 logs.go:282] 0 containers: []
	W1006 14:31:15.889014  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:31:15.889022  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:31:15.889035  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:31:15.956002  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:31:15.956024  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:31:15.969830  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:31:15.969850  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:31:16.026629  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:31:16.019009   11895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:16.019537   11895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:16.021047   11895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:16.021513   11895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:16.023044   11895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1006 14:31:16.026643  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:31:16.026656  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:31:16.085192  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:31:16.085220  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:31:18.616289  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:31:18.627239  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:31:18.627304  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:31:18.655298  656123 cri.go:89] found id: ""
	I1006 14:31:18.655318  656123 logs.go:282] 0 containers: []
	W1006 14:31:18.655327  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:31:18.655334  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:31:18.655392  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:31:18.682590  656123 cri.go:89] found id: ""
	I1006 14:31:18.682609  656123 logs.go:282] 0 containers: []
	W1006 14:31:18.682616  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:31:18.682623  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:31:18.682684  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:31:18.709329  656123 cri.go:89] found id: ""
	I1006 14:31:18.709349  656123 logs.go:282] 0 containers: []
	W1006 14:31:18.709359  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:31:18.709366  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:31:18.709428  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:31:18.735272  656123 cri.go:89] found id: ""
	I1006 14:31:18.735292  656123 logs.go:282] 0 containers: []
	W1006 14:31:18.735302  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:31:18.735309  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:31:18.735370  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:31:18.761956  656123 cri.go:89] found id: ""
	I1006 14:31:18.761973  656123 logs.go:282] 0 containers: []
	W1006 14:31:18.761980  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:31:18.761984  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:31:18.762047  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:31:18.788186  656123 cri.go:89] found id: ""
	I1006 14:31:18.788224  656123 logs.go:282] 0 containers: []
	W1006 14:31:18.788234  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:31:18.788241  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:31:18.788293  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:31:18.814751  656123 cri.go:89] found id: ""
	I1006 14:31:18.814768  656123 logs.go:282] 0 containers: []
	W1006 14:31:18.814775  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:31:18.814783  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:31:18.814793  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:31:18.874634  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:31:18.867140   12017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:18.867734   12017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:18.869314   12017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:18.869766   12017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:18.871291   12017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1006 14:31:18.874645  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:31:18.874658  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:31:18.934741  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:31:18.934765  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:31:18.964835  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:31:18.964857  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:31:19.034348  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:31:19.034372  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:31:21.549097  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:31:21.560431  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:31:21.560497  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:31:21.588270  656123 cri.go:89] found id: ""
	I1006 14:31:21.588285  656123 logs.go:282] 0 containers: []
	W1006 14:31:21.588292  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:31:21.588297  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:31:21.588352  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:31:21.615501  656123 cri.go:89] found id: ""
	I1006 14:31:21.615519  656123 logs.go:282] 0 containers: []
	W1006 14:31:21.615527  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:31:21.615532  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:31:21.615590  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:31:21.643122  656123 cri.go:89] found id: ""
	I1006 14:31:21.643143  656123 logs.go:282] 0 containers: []
	W1006 14:31:21.643150  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:31:21.643154  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:31:21.643222  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:31:21.670611  656123 cri.go:89] found id: ""
	I1006 14:31:21.670628  656123 logs.go:282] 0 containers: []
	W1006 14:31:21.670635  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:31:21.670642  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:31:21.670705  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:31:21.698443  656123 cri.go:89] found id: ""
	I1006 14:31:21.698460  656123 logs.go:282] 0 containers: []
	W1006 14:31:21.698467  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:31:21.698472  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:31:21.698521  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:31:21.726957  656123 cri.go:89] found id: ""
	I1006 14:31:21.726973  656123 logs.go:282] 0 containers: []
	W1006 14:31:21.726981  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:31:21.726986  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:31:21.727032  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:31:21.754606  656123 cri.go:89] found id: ""
	I1006 14:31:21.754628  656123 logs.go:282] 0 containers: []
	W1006 14:31:21.754638  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:31:21.754648  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:31:21.754661  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:31:21.814709  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:31:21.814731  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:31:21.846526  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:31:21.846543  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:31:21.915125  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:31:21.915156  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:31:21.929444  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:31:21.929482  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:31:21.988239  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:31:21.980740   12168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:21.981329   12168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:21.982927   12168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:21.983357   12168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:21.984775   12168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1006 14:31:24.489339  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:31:24.500246  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:31:24.500303  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:31:24.527224  656123 cri.go:89] found id: ""
	I1006 14:31:24.527243  656123 logs.go:282] 0 containers: []
	W1006 14:31:24.527253  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:31:24.527258  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:31:24.527309  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:31:24.552540  656123 cri.go:89] found id: ""
	I1006 14:31:24.552559  656123 logs.go:282] 0 containers: []
	W1006 14:31:24.552567  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:31:24.552573  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:31:24.552636  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:31:24.581110  656123 cri.go:89] found id: ""
	I1006 14:31:24.581125  656123 logs.go:282] 0 containers: []
	W1006 14:31:24.581131  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:31:24.581138  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:31:24.581201  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:31:24.607563  656123 cri.go:89] found id: ""
	I1006 14:31:24.607580  656123 logs.go:282] 0 containers: []
	W1006 14:31:24.607588  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:31:24.607592  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:31:24.607649  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:31:24.633221  656123 cri.go:89] found id: ""
	I1006 14:31:24.633241  656123 logs.go:282] 0 containers: []
	W1006 14:31:24.633249  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:31:24.633255  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:31:24.633303  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:31:24.658521  656123 cri.go:89] found id: ""
	I1006 14:31:24.658540  656123 logs.go:282] 0 containers: []
	W1006 14:31:24.658547  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:31:24.658552  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:31:24.658611  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:31:24.684336  656123 cri.go:89] found id: ""
	I1006 14:31:24.684351  656123 logs.go:282] 0 containers: []
	W1006 14:31:24.684358  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:31:24.684367  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:31:24.684381  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:31:24.743258  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:31:24.735488   12275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:24.735921   12275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:24.737653   12275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:24.738173   12275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:24.739491   12275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1006 14:31:24.743270  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:31:24.743283  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:31:24.802373  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:31:24.802398  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:31:24.832699  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:31:24.832716  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:31:24.898746  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:31:24.898768  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:31:27.413617  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:31:27.424393  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:31:27.424454  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:31:27.452153  656123 cri.go:89] found id: ""
	I1006 14:31:27.452173  656123 logs.go:282] 0 containers: []
	W1006 14:31:27.452181  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:31:27.452186  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:31:27.452268  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:31:27.477797  656123 cri.go:89] found id: ""
	I1006 14:31:27.477815  656123 logs.go:282] 0 containers: []
	W1006 14:31:27.477822  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:31:27.477827  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:31:27.477881  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:31:27.502952  656123 cri.go:89] found id: ""
	I1006 14:31:27.502971  656123 logs.go:282] 0 containers: []
	W1006 14:31:27.502978  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:31:27.502983  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:31:27.503039  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:31:27.529416  656123 cri.go:89] found id: ""
	I1006 14:31:27.529433  656123 logs.go:282] 0 containers: []
	W1006 14:31:27.529440  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:31:27.529444  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:31:27.529504  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:31:27.554632  656123 cri.go:89] found id: ""
	I1006 14:31:27.554651  656123 logs.go:282] 0 containers: []
	W1006 14:31:27.554659  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:31:27.554664  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:31:27.554713  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:31:27.580924  656123 cri.go:89] found id: ""
	I1006 14:31:27.580942  656123 logs.go:282] 0 containers: []
	W1006 14:31:27.580948  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:31:27.580954  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:31:27.581007  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:31:27.605807  656123 cri.go:89] found id: ""
	I1006 14:31:27.605826  656123 logs.go:282] 0 containers: []
	W1006 14:31:27.605836  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:31:27.605846  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:31:27.605860  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:31:27.618904  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:31:27.618922  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:31:27.677305  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:31:27.669937   12394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:27.670557   12394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:27.672091   12394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:27.672543   12394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:27.673638   12394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1006 14:31:27.677315  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:31:27.677326  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:31:27.739103  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:31:27.739125  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:31:27.767028  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:31:27.767049  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:31:30.336333  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:31:30.348665  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:31:30.348724  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:31:30.377945  656123 cri.go:89] found id: ""
	I1006 14:31:30.377963  656123 logs.go:282] 0 containers: []
	W1006 14:31:30.377973  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:31:30.377979  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:31:30.378035  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:31:30.406369  656123 cri.go:89] found id: ""
	I1006 14:31:30.406391  656123 logs.go:282] 0 containers: []
	W1006 14:31:30.406400  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:31:30.406407  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:31:30.406484  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:31:30.435610  656123 cri.go:89] found id: ""
	I1006 14:31:30.435634  656123 logs.go:282] 0 containers: []
	W1006 14:31:30.435644  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:31:30.435650  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:31:30.435715  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:31:30.464182  656123 cri.go:89] found id: ""
	I1006 14:31:30.464201  656123 logs.go:282] 0 containers: []
	W1006 14:31:30.464222  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:31:30.464230  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:31:30.464285  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:31:30.493191  656123 cri.go:89] found id: ""
	I1006 14:31:30.493237  656123 logs.go:282] 0 containers: []
	W1006 14:31:30.493254  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:31:30.493260  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:31:30.493313  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:31:30.522664  656123 cri.go:89] found id: ""
	I1006 14:31:30.522684  656123 logs.go:282] 0 containers: []
	W1006 14:31:30.522695  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:31:30.522702  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:31:30.522762  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:31:30.553858  656123 cri.go:89] found id: ""
	I1006 14:31:30.553874  656123 logs.go:282] 0 containers: []
	W1006 14:31:30.553880  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:31:30.553891  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:31:30.553905  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:31:30.625537  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:31:30.625563  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:31:30.641100  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:31:30.641127  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:31:30.705527  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:31:30.696933   12514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:30.697691   12514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:30.699345   12514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:30.699934   12514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:30.701560   12514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1006 14:31:30.705543  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:31:30.705560  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:31:30.768236  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:31:30.768263  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:31:33.302531  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:31:33.314251  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:31:33.314308  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:31:33.343374  656123 cri.go:89] found id: ""
	I1006 14:31:33.343394  656123 logs.go:282] 0 containers: []
	W1006 14:31:33.343404  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:31:33.343411  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:31:33.343491  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:31:33.369870  656123 cri.go:89] found id: ""
	I1006 14:31:33.369885  656123 logs.go:282] 0 containers: []
	W1006 14:31:33.369891  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:31:33.369895  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:31:33.369944  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:31:33.394611  656123 cri.go:89] found id: ""
	I1006 14:31:33.394631  656123 logs.go:282] 0 containers: []
	W1006 14:31:33.394640  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:31:33.394647  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:31:33.394696  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:31:33.420323  656123 cri.go:89] found id: ""
	I1006 14:31:33.420338  656123 logs.go:282] 0 containers: []
	W1006 14:31:33.420345  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:31:33.420350  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:31:33.420399  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:31:33.446454  656123 cri.go:89] found id: ""
	I1006 14:31:33.446483  656123 logs.go:282] 0 containers: []
	W1006 14:31:33.446493  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:31:33.446501  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:31:33.446557  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:31:33.471998  656123 cri.go:89] found id: ""
	I1006 14:31:33.472013  656123 logs.go:282] 0 containers: []
	W1006 14:31:33.472019  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:31:33.472025  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:31:33.472073  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:31:33.498038  656123 cri.go:89] found id: ""
	I1006 14:31:33.498052  656123 logs.go:282] 0 containers: []
	W1006 14:31:33.498058  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:31:33.498067  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:31:33.498077  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:31:33.554956  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:31:33.547323   12635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:33.547831   12635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:33.549458   12635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:33.549938   12635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:33.551501   12635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1006 14:31:33.554967  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:31:33.554978  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:31:33.617723  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:31:33.617747  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:31:33.647466  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:31:33.647482  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:31:33.718107  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:31:33.718128  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:31:36.233955  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:31:36.245297  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:31:36.245362  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:31:36.272483  656123 cri.go:89] found id: ""
	I1006 14:31:36.272502  656123 logs.go:282] 0 containers: []
	W1006 14:31:36.272509  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:31:36.272515  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:31:36.272574  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:31:36.299177  656123 cri.go:89] found id: ""
	I1006 14:31:36.299192  656123 logs.go:282] 0 containers: []
	W1006 14:31:36.299199  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:31:36.299229  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:31:36.299284  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:31:36.325899  656123 cri.go:89] found id: ""
	I1006 14:31:36.325920  656123 logs.go:282] 0 containers: []
	W1006 14:31:36.325938  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:31:36.325946  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:31:36.326000  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:31:36.353043  656123 cri.go:89] found id: ""
	I1006 14:31:36.353059  656123 logs.go:282] 0 containers: []
	W1006 14:31:36.353065  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:31:36.353070  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:31:36.353117  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:31:36.379229  656123 cri.go:89] found id: ""
	I1006 14:31:36.379249  656123 logs.go:282] 0 containers: []
	W1006 14:31:36.379259  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:31:36.379263  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:31:36.379320  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:31:36.407572  656123 cri.go:89] found id: ""
	I1006 14:31:36.407589  656123 logs.go:282] 0 containers: []
	W1006 14:31:36.407596  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:31:36.407601  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:31:36.407651  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:31:36.435005  656123 cri.go:89] found id: ""
	I1006 14:31:36.435022  656123 logs.go:282] 0 containers: []
	W1006 14:31:36.435028  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:31:36.435036  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:31:36.435047  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:31:36.512293  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:31:36.512319  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:31:36.526942  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:31:36.526966  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:31:36.587325  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:31:36.579436   12771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:36.579991   12771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:36.581727   12771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:36.582244   12771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:36.583796   12771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1006 14:31:36.587336  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:31:36.587349  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:31:36.648638  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:31:36.648672  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:31:39.181798  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:31:39.193122  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:31:39.193188  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:31:39.221286  656123 cri.go:89] found id: ""
	I1006 14:31:39.221304  656123 logs.go:282] 0 containers: []
	W1006 14:31:39.221312  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:31:39.221317  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:31:39.221376  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:31:39.248422  656123 cri.go:89] found id: ""
	I1006 14:31:39.248437  656123 logs.go:282] 0 containers: []
	W1006 14:31:39.248445  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:31:39.248450  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:31:39.248497  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:31:39.277291  656123 cri.go:89] found id: ""
	I1006 14:31:39.277308  656123 logs.go:282] 0 containers: []
	W1006 14:31:39.277316  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:31:39.277322  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:31:39.277390  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:31:39.303982  656123 cri.go:89] found id: ""
	I1006 14:31:39.303999  656123 logs.go:282] 0 containers: []
	W1006 14:31:39.304005  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:31:39.304011  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:31:39.304062  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:31:39.330654  656123 cri.go:89] found id: ""
	I1006 14:31:39.330674  656123 logs.go:282] 0 containers: []
	W1006 14:31:39.330681  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:31:39.330686  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:31:39.330732  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:31:39.357141  656123 cri.go:89] found id: ""
	I1006 14:31:39.357156  656123 logs.go:282] 0 containers: []
	W1006 14:31:39.357163  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:31:39.357168  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:31:39.357241  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:31:39.383968  656123 cri.go:89] found id: ""
	I1006 14:31:39.383986  656123 logs.go:282] 0 containers: []
	W1006 14:31:39.383993  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:31:39.384002  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:31:39.384014  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:31:39.451579  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:31:39.451604  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:31:39.465454  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:31:39.465475  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:31:39.523259  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:31:39.515550   12896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:39.516185   12896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:39.517720   12896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:39.518181   12896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:39.519823   12896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1006 14:31:39.523273  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:31:39.523285  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:31:39.585241  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:31:39.585265  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:31:42.115015  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:31:42.126583  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:31:42.126634  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:31:42.153385  656123 cri.go:89] found id: ""
	I1006 14:31:42.153406  656123 logs.go:282] 0 containers: []
	W1006 14:31:42.153416  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:31:42.153422  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:31:42.153479  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:31:42.181021  656123 cri.go:89] found id: ""
	I1006 14:31:42.181039  656123 logs.go:282] 0 containers: []
	W1006 14:31:42.181049  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:31:42.181055  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:31:42.181116  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:31:42.208104  656123 cri.go:89] found id: ""
	I1006 14:31:42.208123  656123 logs.go:282] 0 containers: []
	W1006 14:31:42.208133  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:31:42.208139  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:31:42.208190  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:31:42.235099  656123 cri.go:89] found id: ""
	I1006 14:31:42.235115  656123 logs.go:282] 0 containers: []
	W1006 14:31:42.235123  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:31:42.235128  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:31:42.235176  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:31:42.262052  656123 cri.go:89] found id: ""
	I1006 14:31:42.262072  656123 logs.go:282] 0 containers: []
	W1006 14:31:42.262079  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:31:42.262084  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:31:42.262142  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:31:42.288093  656123 cri.go:89] found id: ""
	I1006 14:31:42.288111  656123 logs.go:282] 0 containers: []
	W1006 14:31:42.288119  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:31:42.288124  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:31:42.288179  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:31:42.314049  656123 cri.go:89] found id: ""
	I1006 14:31:42.314068  656123 logs.go:282] 0 containers: []
	W1006 14:31:42.314076  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:31:42.314087  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:31:42.314100  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:31:42.379866  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:31:42.379892  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:31:42.393937  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:31:42.393965  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:31:42.452376  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:31:42.444669   13013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:42.445228   13013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:42.446633   13013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:42.447200   13013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:42.448583   13013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1006 14:31:42.452388  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:31:42.452400  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:31:42.513323  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:31:42.513346  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:31:45.045836  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:31:45.056587  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:31:45.056634  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:31:45.082895  656123 cri.go:89] found id: ""
	I1006 14:31:45.082913  656123 logs.go:282] 0 containers: []
	W1006 14:31:45.082922  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:31:45.082930  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:31:45.082981  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:31:45.109560  656123 cri.go:89] found id: ""
	I1006 14:31:45.109579  656123 logs.go:282] 0 containers: []
	W1006 14:31:45.109589  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:31:45.109595  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:31:45.109651  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:31:45.136033  656123 cri.go:89] found id: ""
	I1006 14:31:45.136055  656123 logs.go:282] 0 containers: []
	W1006 14:31:45.136065  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:31:45.136072  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:31:45.136145  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:31:45.162396  656123 cri.go:89] found id: ""
	I1006 14:31:45.162416  656123 logs.go:282] 0 containers: []
	W1006 14:31:45.162423  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:31:45.162427  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:31:45.162493  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:31:45.188063  656123 cri.go:89] found id: ""
	I1006 14:31:45.188077  656123 logs.go:282] 0 containers: []
	W1006 14:31:45.188084  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:31:45.188090  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:31:45.188135  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:31:45.214119  656123 cri.go:89] found id: ""
	I1006 14:31:45.214140  656123 logs.go:282] 0 containers: []
	W1006 14:31:45.214150  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:31:45.214157  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:31:45.214234  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:31:45.242147  656123 cri.go:89] found id: ""
	I1006 14:31:45.242166  656123 logs.go:282] 0 containers: []
	W1006 14:31:45.242176  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:31:45.242187  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:31:45.242201  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:31:45.311929  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:31:45.311952  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:31:45.324994  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:31:45.325015  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:31:45.381458  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:31:45.373267   13133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:45.374021   13133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:45.374992   13133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:45.376701   13133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:45.377102   13133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1006 14:31:45.381470  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:31:45.381483  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:31:45.445634  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:31:45.445652  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
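The "Gathering logs" probes above all run inside the node over SSH. They can be reproduced by hand from the host; a minimal sketch, with PROFILE standing in as a hypothetical name for whatever profile this test created:

    # Same diagnostics minikube's log collector runs, issued via minikube ssh.
    minikube ssh -p PROFILE -- "sudo journalctl -u kubelet -n 400"
    minikube ssh -p PROFILE -- "sudo journalctl -u crio -n 400"
    minikube ssh -p PROFILE -- "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
    minikube ssh -p PROFILE -- "sudo crictl ps -a"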
	I1006 14:31:47.975088  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:31:47.986084  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:31:47.986144  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:31:48.013186  656123 cri.go:89] found id: ""
	I1006 14:31:48.013218  656123 logs.go:282] 0 containers: []
	W1006 14:31:48.013229  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:31:48.013235  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:31:48.013289  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:31:48.039286  656123 cri.go:89] found id: ""
	I1006 14:31:48.039301  656123 logs.go:282] 0 containers: []
	W1006 14:31:48.039308  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:31:48.039313  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:31:48.039361  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:31:48.065798  656123 cri.go:89] found id: ""
	I1006 14:31:48.065813  656123 logs.go:282] 0 containers: []
	W1006 14:31:48.065821  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:31:48.065826  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:31:48.065873  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:31:48.091102  656123 cri.go:89] found id: ""
	I1006 14:31:48.091119  656123 logs.go:282] 0 containers: []
	W1006 14:31:48.091128  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:31:48.091133  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:31:48.091188  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:31:48.117766  656123 cri.go:89] found id: ""
	I1006 14:31:48.117783  656123 logs.go:282] 0 containers: []
	W1006 14:31:48.117790  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:31:48.117795  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:31:48.117844  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:31:48.144583  656123 cri.go:89] found id: ""
	I1006 14:31:48.144598  656123 logs.go:282] 0 containers: []
	W1006 14:31:48.144604  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:31:48.144609  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:31:48.144655  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:31:48.171397  656123 cri.go:89] found id: ""
	I1006 14:31:48.171413  656123 logs.go:282] 0 containers: []
	W1006 14:31:48.171421  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:31:48.171429  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:31:48.171440  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:31:48.232721  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:31:48.232743  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:31:48.262521  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:31:48.262540  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:31:48.332831  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:31:48.332851  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:31:48.346228  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:31:48.346247  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:31:48.402332  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:31:48.395067   13273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:48.395636   13273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:48.397181   13273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:48.397582   13273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:48.399142   13273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:31:48.395067   13273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:48.395636   13273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:48.397181   13273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:48.397582   13273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:48.399142   13273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
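Every describe-nodes attempt in this stretch dies the same way: kubectl dials localhost:8441 and is refused, i.e. nothing is listening on the apiserver port inside the node. Two quick checks, run in the node (ss and curl availability assumed):

    # Is anything bound to the apiserver port at all?
    sudo ss -ltnp | grep 8441 || echo "no listener on 8441"
    # Unauthenticated health probe; connection refused here confirms the apiserver is down.
    curl -sk https://localhost:8441/healthz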
	I1006 14:31:50.903091  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:31:50.914581  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:31:50.914643  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:31:50.940118  656123 cri.go:89] found id: ""
	I1006 14:31:50.940134  656123 logs.go:282] 0 containers: []
	W1006 14:31:50.940144  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:31:50.940152  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:31:50.940244  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:31:50.967927  656123 cri.go:89] found id: ""
	I1006 14:31:50.967942  656123 logs.go:282] 0 containers: []
	W1006 14:31:50.967950  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:31:50.967955  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:31:50.968012  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:31:50.994911  656123 cri.go:89] found id: ""
	I1006 14:31:50.994926  656123 logs.go:282] 0 containers: []
	W1006 14:31:50.994933  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:31:50.994938  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:31:50.994983  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:31:51.021349  656123 cri.go:89] found id: ""
	I1006 14:31:51.021367  656123 logs.go:282] 0 containers: []
	W1006 14:31:51.021376  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:31:51.021381  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:31:51.021450  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:31:51.047856  656123 cri.go:89] found id: ""
	I1006 14:31:51.047875  656123 logs.go:282] 0 containers: []
	W1006 14:31:51.047885  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:31:51.047892  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:31:51.047953  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:31:51.074984  656123 cri.go:89] found id: ""
	I1006 14:31:51.075002  656123 logs.go:282] 0 containers: []
	W1006 14:31:51.075009  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:31:51.075014  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:31:51.075076  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:31:51.102644  656123 cri.go:89] found id: ""
	I1006 14:31:51.102660  656123 logs.go:282] 0 containers: []
	W1006 14:31:51.102668  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:31:51.102677  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:31:51.102692  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:31:51.164842  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:31:51.164869  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:31:51.194272  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:31:51.194293  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:31:51.264785  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:31:51.264809  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:31:51.279283  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:31:51.279311  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:31:51.337565  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:31:51.329770   13401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:51.330346   13401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:51.331936   13401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:51.332399   13401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:51.334039   13401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:31:51.329770   13401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:51.330346   13401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:51.331936   13401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:51.332399   13401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:51.334039   13401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
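Note that the describe-nodes probe does not use a host kubectl: it invokes the version-pinned binary minikube stages under /var/lib/minikube/binaries, pointed at the in-node kubeconfig. Run directly inside the node, the probe is just:

    # Version-pinned kubectl staged by minikube inside the node.
    sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes \
        --kubeconfig=/var/lib/minikube/kubeconfig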
	I1006 14:31:53.839279  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:31:53.850387  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:31:53.850446  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:31:53.878099  656123 cri.go:89] found id: ""
	I1006 14:31:53.878125  656123 logs.go:282] 0 containers: []
	W1006 14:31:53.878135  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:31:53.878142  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:31:53.878199  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:31:53.905974  656123 cri.go:89] found id: ""
	I1006 14:31:53.905994  656123 logs.go:282] 0 containers: []
	W1006 14:31:53.906004  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:31:53.906011  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:31:53.906073  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:31:53.934338  656123 cri.go:89] found id: ""
	I1006 14:31:53.934355  656123 logs.go:282] 0 containers: []
	W1006 14:31:53.934362  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:31:53.934367  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:31:53.934417  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:31:53.961409  656123 cri.go:89] found id: ""
	I1006 14:31:53.961428  656123 logs.go:282] 0 containers: []
	W1006 14:31:53.961436  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:31:53.961442  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:31:53.961492  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:31:53.988451  656123 cri.go:89] found id: ""
	I1006 14:31:53.988468  656123 logs.go:282] 0 containers: []
	W1006 14:31:53.988475  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:31:53.988481  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:31:53.988541  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:31:54.015683  656123 cri.go:89] found id: ""
	I1006 14:31:54.015703  656123 logs.go:282] 0 containers: []
	W1006 14:31:54.015712  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:31:54.015718  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:31:54.015769  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:31:54.043179  656123 cri.go:89] found id: ""
	I1006 14:31:54.043196  656123 logs.go:282] 0 containers: []
	W1006 14:31:54.043215  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:31:54.043226  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:31:54.043242  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:31:54.107582  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:31:54.107606  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:31:54.138057  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:31:54.138078  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:31:54.204366  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:31:54.204394  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:31:54.218513  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:31:54.218535  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:31:54.279164  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:31:54.271489   13525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:54.272091   13525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:54.273620   13525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:54.274071   13525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:54.275622   13525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:31:54.271489   13525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:54.272091   13525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:54.273620   13525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:54.274071   13525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:54.275622   13525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
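Each retry performs the same seven crictl lookups, one control-plane component at a time, and every one returns an empty ID list. The scan collapses to a short loop; this sketch only mirrors what the log already shows:

    # One-shot version of the per-component scan above.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet; do
        ids=$(sudo crictl ps -a --quiet --name="$name")
        [ -z "$ids" ] && echo "no container found matching \"$name\"" || echo "$name: $ids"
    done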
	I1006 14:31:56.780360  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:31:56.791915  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:31:56.791969  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:31:56.817452  656123 cri.go:89] found id: ""
	I1006 14:31:56.817470  656123 logs.go:282] 0 containers: []
	W1006 14:31:56.817477  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:31:56.817483  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:31:56.817529  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:31:56.842632  656123 cri.go:89] found id: ""
	I1006 14:31:56.842646  656123 logs.go:282] 0 containers: []
	W1006 14:31:56.842653  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:31:56.842657  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:31:56.842700  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:31:56.870346  656123 cri.go:89] found id: ""
	I1006 14:31:56.870361  656123 logs.go:282] 0 containers: []
	W1006 14:31:56.870368  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:31:56.870373  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:31:56.870421  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:31:56.898085  656123 cri.go:89] found id: ""
	I1006 14:31:56.898102  656123 logs.go:282] 0 containers: []
	W1006 14:31:56.898107  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:31:56.898112  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:31:56.898172  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:31:56.925826  656123 cri.go:89] found id: ""
	I1006 14:31:56.925842  656123 logs.go:282] 0 containers: []
	W1006 14:31:56.925849  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:31:56.925854  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:31:56.925917  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:31:56.952736  656123 cri.go:89] found id: ""
	I1006 14:31:56.952753  656123 logs.go:282] 0 containers: []
	W1006 14:31:56.952759  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:31:56.952764  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:31:56.952817  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:31:56.981505  656123 cri.go:89] found id: ""
	I1006 14:31:56.981524  656123 logs.go:282] 0 containers: []
	W1006 14:31:56.981534  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:31:56.981544  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:31:56.981558  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:31:57.038974  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:31:57.031730   13621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:57.032302   13621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:57.033897   13621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:57.034349   13621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:57.035558   13621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:31:57.031730   13621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:57.032302   13621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:57.033897   13621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:57.034349   13621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:57.035558   13621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 14:31:57.038998  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:31:57.039009  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:31:57.104175  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:31:57.104199  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:31:57.133096  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:31:57.133118  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:31:57.198894  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:31:57.198924  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
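The timestamps show a fresh pgrep probe for a kube-apiserver process roughly every three seconds. The equivalent wait loop looks like the sketch below; the two-minute deadline is illustrative, not minikube's actual timeout:

    # Poll for a running kube-apiserver process; give up after ~2 minutes.
    deadline=$(( $(date +%s) + 120 ))
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
        [ "$(date +%s)" -ge "$deadline" ] && { echo "apiserver never came up" >&2; break; }
        sleep 3
    done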
	I1006 14:31:59.714028  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:31:59.725916  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:31:59.725972  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:31:59.751782  656123 cri.go:89] found id: ""
	I1006 14:31:59.751801  656123 logs.go:282] 0 containers: []
	W1006 14:31:59.751810  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:31:59.751816  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:31:59.751864  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:31:59.776851  656123 cri.go:89] found id: ""
	I1006 14:31:59.776867  656123 logs.go:282] 0 containers: []
	W1006 14:31:59.776874  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:31:59.776878  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:31:59.776924  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:31:59.800431  656123 cri.go:89] found id: ""
	I1006 14:31:59.800447  656123 logs.go:282] 0 containers: []
	W1006 14:31:59.800455  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:31:59.800467  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:31:59.800530  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:31:59.825387  656123 cri.go:89] found id: ""
	I1006 14:31:59.825404  656123 logs.go:282] 0 containers: []
	W1006 14:31:59.825412  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:31:59.825423  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:31:59.825468  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:31:59.849169  656123 cri.go:89] found id: ""
	I1006 14:31:59.849186  656123 logs.go:282] 0 containers: []
	W1006 14:31:59.849195  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:31:59.849232  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:31:59.849291  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:31:59.874820  656123 cri.go:89] found id: ""
	I1006 14:31:59.874835  656123 logs.go:282] 0 containers: []
	W1006 14:31:59.874841  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:31:59.874846  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:31:59.874893  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:31:59.900818  656123 cri.go:89] found id: ""
	I1006 14:31:59.900834  656123 logs.go:282] 0 containers: []
	W1006 14:31:59.900840  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:31:59.900848  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:31:59.900860  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:31:59.957989  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:31:59.950533   13743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:59.951047   13743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:59.952664   13743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:59.953012   13743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:59.954540   13743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:31:59.950533   13743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:59.951047   13743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:59.952664   13743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:59.953012   13743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:59.954540   13743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 14:31:59.958004  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:31:59.958025  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:32:00.016244  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:32:00.016287  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:32:00.047330  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:32:00.047346  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:32:00.111078  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:32:00.111104  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
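kubectl's target, localhost:8441, comes from the kubeconfig passed on the command line, so one sanity check is that the configured endpoint really is the port the apiserver was told to bind. A one-liner against the path shown in the log:

    # Which endpoint does the in-node kubeconfig dial?
    sudo grep -n 'server:' /var/lib/minikube/kubeconfig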
	I1006 14:32:02.626253  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:32:02.637551  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:32:02.637606  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:32:02.665023  656123 cri.go:89] found id: ""
	I1006 14:32:02.665040  656123 logs.go:282] 0 containers: []
	W1006 14:32:02.665050  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:32:02.665056  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:32:02.665118  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:32:02.692374  656123 cri.go:89] found id: ""
	I1006 14:32:02.692397  656123 logs.go:282] 0 containers: []
	W1006 14:32:02.692404  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:32:02.692409  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:32:02.692458  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:32:02.719922  656123 cri.go:89] found id: ""
	I1006 14:32:02.719942  656123 logs.go:282] 0 containers: []
	W1006 14:32:02.719953  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:32:02.719959  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:32:02.720014  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:32:02.746934  656123 cri.go:89] found id: ""
	I1006 14:32:02.746950  656123 logs.go:282] 0 containers: []
	W1006 14:32:02.746956  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:32:02.746962  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:32:02.747009  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:32:02.774417  656123 cri.go:89] found id: ""
	I1006 14:32:02.774435  656123 logs.go:282] 0 containers: []
	W1006 14:32:02.774442  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:32:02.774447  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:32:02.774496  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:32:02.801761  656123 cri.go:89] found id: ""
	I1006 14:32:02.801776  656123 logs.go:282] 0 containers: []
	W1006 14:32:02.801783  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:32:02.801788  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:32:02.801844  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:32:02.828981  656123 cri.go:89] found id: ""
	I1006 14:32:02.828998  656123 logs.go:282] 0 containers: []
	W1006 14:32:02.829005  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:32:02.829014  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:32:02.829028  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:32:02.895754  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:32:02.895778  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:32:02.909930  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:32:02.909950  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:32:02.968533  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:32:02.961042   13877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:32:02.961577   13877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:32:02.963104   13877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:32:02.963565   13877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:32:02.965085   13877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:32:02.961042   13877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:32:02.961577   13877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:32:02.963104   13877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:32:02.963565   13877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:32:02.965085   13877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 14:32:02.968546  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:32:02.968560  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:32:03.033943  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:32:03.033967  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
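An empty ID list from crictl ps -a --quiet means CRI-O never created the container at all; a crash-looping component would still show exited containers under -a. Checking whether any pod sandboxes exist narrows things further:

    # Pod sandboxes known to CRI-O; empty output means the kubelet never got far
    # enough to request any.
    sudo crictl pods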
	I1006 14:32:05.566153  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:32:05.577534  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:32:05.577601  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:32:05.604282  656123 cri.go:89] found id: ""
	I1006 14:32:05.604301  656123 logs.go:282] 0 containers: []
	W1006 14:32:05.604311  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:32:05.604317  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:32:05.604375  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:32:05.631089  656123 cri.go:89] found id: ""
	I1006 14:32:05.631105  656123 logs.go:282] 0 containers: []
	W1006 14:32:05.631112  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:32:05.631116  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:32:05.631172  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:32:05.658464  656123 cri.go:89] found id: ""
	I1006 14:32:05.658484  656123 logs.go:282] 0 containers: []
	W1006 14:32:05.658495  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:32:05.658501  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:32:05.658559  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:32:05.685951  656123 cri.go:89] found id: ""
	I1006 14:32:05.685971  656123 logs.go:282] 0 containers: []
	W1006 14:32:05.685980  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:32:05.685987  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:32:05.686043  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:32:05.712003  656123 cri.go:89] found id: ""
	I1006 14:32:05.712020  656123 logs.go:282] 0 containers: []
	W1006 14:32:05.712028  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:32:05.712033  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:32:05.712093  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:32:05.740632  656123 cri.go:89] found id: ""
	I1006 14:32:05.740652  656123 logs.go:282] 0 containers: []
	W1006 14:32:05.740660  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:32:05.740667  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:32:05.740728  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:32:05.766042  656123 cri.go:89] found id: ""
	I1006 14:32:05.766064  656123 logs.go:282] 0 containers: []
	W1006 14:32:05.766072  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:32:05.766080  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:32:05.766092  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:32:05.837102  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:32:05.837132  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:32:05.851014  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:32:05.851038  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:32:05.910902  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:32:05.903038   14001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:32:05.903650   14001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:32:05.905294   14001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:32:05.905834   14001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:32:05.907440   14001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:32:05.903038   14001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:32:05.903650   14001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:32:05.905294   14001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:32:05.905834   14001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:32:05.907440   14001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 14:32:05.910914  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:32:05.910927  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:32:05.975171  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:32:05.975197  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
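With no control-plane containers ever appearing, the kubelet journal gathered above is where the root cause should surface. A narrower query than the 400-line dump, assuming journald's priority filter is enough to isolate it:

    # Error-priority kubelet lines from the current boot only.
    sudo journalctl -u kubelet -b -p err --no-pager -n 100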
	I1006 14:32:08.507407  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:32:08.518312  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:32:08.518362  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:32:08.544556  656123 cri.go:89] found id: ""
	I1006 14:32:08.544575  656123 logs.go:282] 0 containers: []
	W1006 14:32:08.544585  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:32:08.544591  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:32:08.544646  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:32:08.569832  656123 cri.go:89] found id: ""
	I1006 14:32:08.569849  656123 logs.go:282] 0 containers: []
	W1006 14:32:08.569858  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:32:08.569863  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:32:08.569911  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:32:08.595352  656123 cri.go:89] found id: ""
	I1006 14:32:08.595368  656123 logs.go:282] 0 containers: []
	W1006 14:32:08.595377  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:32:08.595383  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:32:08.595447  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:32:08.621980  656123 cri.go:89] found id: ""
	I1006 14:32:08.621995  656123 logs.go:282] 0 containers: []
	W1006 14:32:08.622001  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:32:08.622006  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:32:08.622062  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:32:08.648436  656123 cri.go:89] found id: ""
	I1006 14:32:08.648451  656123 logs.go:282] 0 containers: []
	W1006 14:32:08.648458  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:32:08.648462  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:32:08.648519  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:32:08.673561  656123 cri.go:89] found id: ""
	I1006 14:32:08.673579  656123 logs.go:282] 0 containers: []
	W1006 14:32:08.673589  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:32:08.673595  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:32:08.673657  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:32:08.699829  656123 cri.go:89] found id: ""
	I1006 14:32:08.699847  656123 logs.go:282] 0 containers: []
	W1006 14:32:08.699855  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:32:08.699866  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:32:08.699884  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:32:08.712951  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:32:08.712972  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:32:08.769035  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:32:08.761477   14117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:32:08.762001   14117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:32:08.763631   14117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:32:08.764099   14117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:32:08.765640   14117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:32:08.761477   14117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:32:08.762001   14117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:32:08.763631   14117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:32:08.764099   14117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:32:08.765640   14117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 14:32:08.769047  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:32:08.769063  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:32:08.832511  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:32:08.832534  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:32:08.861346  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:32:08.861364  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
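Two details of the probes are easy to miss. The container-status command is deliberately defensive, resolving crictl via which and falling back to docker ps if the crictl call fails, and the dmesg invocation is tightly filtered. Both, restated standalone:

    # Fallback chain used by the container-status probe above.
    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a
    # dmesg: -P no pager, -H human-readable output, -L=never no color,
    # --level keeps only warning severity and worse.
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400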
	I1006 14:32:11.430582  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:32:11.441872  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:32:11.441923  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:32:11.467567  656123 cri.go:89] found id: ""
	I1006 14:32:11.467586  656123 logs.go:282] 0 containers: []
	W1006 14:32:11.467596  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:32:11.467603  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:32:11.467660  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:32:11.494656  656123 cri.go:89] found id: ""
	I1006 14:32:11.494683  656123 logs.go:282] 0 containers: []
	W1006 14:32:11.494690  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:32:11.494695  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:32:11.494743  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:32:11.521748  656123 cri.go:89] found id: ""
	I1006 14:32:11.521763  656123 logs.go:282] 0 containers: []
	W1006 14:32:11.521770  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:32:11.521775  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:32:11.521820  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:32:11.548602  656123 cri.go:89] found id: ""
	I1006 14:32:11.548620  656123 logs.go:282] 0 containers: []
	W1006 14:32:11.548626  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:32:11.548632  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:32:11.548691  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:32:11.576572  656123 cri.go:89] found id: ""
	I1006 14:32:11.576588  656123 logs.go:282] 0 containers: []
	W1006 14:32:11.576595  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:32:11.576600  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:32:11.576651  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:32:11.603326  656123 cri.go:89] found id: ""
	I1006 14:32:11.603346  656123 logs.go:282] 0 containers: []
	W1006 14:32:11.603355  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:32:11.603360  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:32:11.603415  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:32:11.629710  656123 cri.go:89] found id: ""
	I1006 14:32:11.629728  656123 logs.go:282] 0 containers: []
	W1006 14:32:11.629738  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:32:11.629747  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:32:11.629757  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:32:11.700650  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:32:11.700726  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:32:11.714603  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:32:11.714630  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:32:11.772602  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:32:11.764966   14244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:32:11.765455   14244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:32:11.767171   14244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:32:11.767660   14244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:32:11.769186   14244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:32:11.764966   14244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:32:11.765455   14244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:32:11.767171   14244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:32:11.767660   14244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:32:11.769186   14244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 14:32:11.772614  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:32:11.772626  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:32:11.833230  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:32:11.833254  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
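The loop above is minikube's diagnostic pass: with the API server unreachable, it repeatedly gathers kubelet, CRI-O, dmesg, and container-status logs. The same data can be pulled by hand from the node. A minimal sketch, assuming the profile is named functional-000000 (illustrative; substitute the profile from your own run) and reusing the exact commands from the Run: lines above:

  minikube ssh -p functional-000000 "sudo journalctl -u kubelet -n 400"
  minikube ssh -p functional-000000 "sudo journalctl -u crio -n 400"
  minikube ssh -p functional-000000 "sudo crictl ps -a --quiet --name=kube-apiserver"

In this run every such crictl query comes back empty (each "found id" line above is ""), i.e. the control-plane containers were never created or were already removed.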
	I1006 14:32:14.365875  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:32:14.376698  656123 kubeadm.go:601] duration metric: took 4m4.218544485s to restartPrimaryControlPlane
	W1006 14:32:14.376820  656123 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1006 14:32:14.376904  656123 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1006 14:32:14.835776  656123 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1006 14:32:14.848804  656123 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1006 14:32:14.857253  656123 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1006 14:32:14.857310  656123 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1006 14:32:14.864786  656123 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1006 14:32:14.864795  656123 kubeadm.go:157] found existing configuration files:
	
	I1006 14:32:14.864835  656123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1006 14:32:14.872239  656123 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1006 14:32:14.872285  656123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1006 14:32:14.879414  656123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1006 14:32:14.886697  656123 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1006 14:32:14.886741  656123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1006 14:32:14.893638  656123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1006 14:32:14.900861  656123 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1006 14:32:14.900895  656123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1006 14:32:14.907789  656123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1006 14:32:14.914902  656123 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1006 14:32:14.914933  656123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1006 14:32:14.921800  656123 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1006 14:32:14.978601  656123 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1006 14:32:15.038549  656123 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1006 14:36:17.406896  656123 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1006 14:36:17.407019  656123 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1006 14:36:17.410627  656123 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1006 14:36:17.410683  656123 kubeadm.go:318] [preflight] Running pre-flight checks
	I1006 14:36:17.410779  656123 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1006 14:36:17.410840  656123 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1006 14:36:17.410869  656123 kubeadm.go:318] OS: Linux
	I1006 14:36:17.410914  656123 kubeadm.go:318] CGROUPS_CPU: enabled
	I1006 14:36:17.410949  656123 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1006 14:36:17.411007  656123 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1006 14:36:17.411060  656123 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1006 14:36:17.411098  656123 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1006 14:36:17.411140  656123 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1006 14:36:17.411189  656123 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1006 14:36:17.411245  656123 kubeadm.go:318] CGROUPS_IO: enabled
	I1006 14:36:17.411317  656123 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1006 14:36:17.411401  656123 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1006 14:36:17.411485  656123 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1006 14:36:17.411556  656123 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1006 14:36:17.413722  656123 out.go:252]   - Generating certificates and keys ...
	I1006 14:36:17.413795  656123 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1006 14:36:17.413884  656123 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1006 14:36:17.413987  656123 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1006 14:36:17.414057  656123 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1006 14:36:17.414137  656123 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1006 14:36:17.414181  656123 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1006 14:36:17.414260  656123 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1006 14:36:17.414334  656123 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1006 14:36:17.414439  656123 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1006 14:36:17.414518  656123 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1006 14:36:17.414578  656123 kubeadm.go:318] [certs] Using the existing "sa" key
	I1006 14:36:17.414662  656123 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1006 14:36:17.414728  656123 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1006 14:36:17.414803  656123 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1006 14:36:17.414845  656123 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1006 14:36:17.414916  656123 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1006 14:36:17.414967  656123 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1006 14:36:17.415028  656123 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1006 14:36:17.415104  656123 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1006 14:36:17.416892  656123 out.go:252]   - Booting up control plane ...
	I1006 14:36:17.416963  656123 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1006 14:36:17.417045  656123 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1006 14:36:17.417099  656123 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1006 14:36:17.417195  656123 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1006 14:36:17.417298  656123 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1006 14:36:17.417388  656123 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1006 14:36:17.417462  656123 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1006 14:36:17.417493  656123 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1006 14:36:17.417595  656123 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1006 14:36:17.417679  656123 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1006 14:36:17.417755  656123 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 502.528699ms
	I1006 14:36:17.417834  656123 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1006 14:36:17.417918  656123 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	I1006 14:36:17.418000  656123 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1006 14:36:17.418064  656123 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1006 14:36:17.418126  656123 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000416419s
	I1006 14:36:17.418196  656123 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000737625s
	I1006 14:36:17.418279  656123 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.00070414s
	I1006 14:36:17.418282  656123 kubeadm.go:318] 
	I1006 14:36:17.418350  656123 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1006 14:36:17.418415  656123 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1006 14:36:17.418514  656123 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1006 14:36:17.418595  656123 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1006 14:36:17.418668  656123 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1006 14:36:17.418749  656123 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1006 14:36:17.418809  656123 kubeadm.go:318] 
	W1006 14:36:17.418920  656123 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 502.528699ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000416419s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000737625s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.00070414s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
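At this point the first kubeadm init attempt has timed out waiting for the control-plane health checks, and minikube retries rather than giving up: as the following lines show, it wipes the state with kubeadm reset and re-runs the same init. Condensed from the Run: lines below (the long --ignore-preflight-errors list is elided here; it appears in full in the init command below):

  sudo kubeadm reset --cri-socket /var/run/crio/crio.sock --force
  sudo env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=<as logged>

The second attempt fails identically, which points away from stale on-disk state and toward the control-plane containers themselves never becoming healthy under CRI-O.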
	
	I1006 14:36:17.419037  656123 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1006 14:36:17.865331  656123 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1006 14:36:17.878364  656123 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1006 14:36:17.878407  656123 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1006 14:36:17.886488  656123 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1006 14:36:17.886495  656123 kubeadm.go:157] found existing configuration files:
	
	I1006 14:36:17.886535  656123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1006 14:36:17.894142  656123 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1006 14:36:17.894180  656123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1006 14:36:17.901791  656123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1006 14:36:17.909427  656123 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1006 14:36:17.909474  656123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1006 14:36:17.916720  656123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1006 14:36:17.924474  656123 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1006 14:36:17.924517  656123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1006 14:36:17.931765  656123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1006 14:36:17.939342  656123 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1006 14:36:17.939397  656123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1006 14:36:17.947232  656123 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1006 14:36:17.986103  656123 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1006 14:36:17.986155  656123 kubeadm.go:318] [preflight] Running pre-flight checks
	I1006 14:36:18.005746  656123 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1006 14:36:18.005847  656123 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1006 14:36:18.005884  656123 kubeadm.go:318] OS: Linux
	I1006 14:36:18.005928  656123 kubeadm.go:318] CGROUPS_CPU: enabled
	I1006 14:36:18.005966  656123 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1006 14:36:18.006009  656123 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1006 14:36:18.006047  656123 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1006 14:36:18.006115  656123 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1006 14:36:18.006229  656123 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1006 14:36:18.006274  656123 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1006 14:36:18.006314  656123 kubeadm.go:318] CGROUPS_IO: enabled
	I1006 14:36:18.063701  656123 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1006 14:36:18.063828  656123 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1006 14:36:18.063979  656123 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1006 14:36:18.070276  656123 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1006 14:36:18.073073  656123 out.go:252]   - Generating certificates and keys ...
	I1006 14:36:18.073146  656123 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1006 14:36:18.073230  656123 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1006 14:36:18.073310  656123 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1006 14:36:18.073360  656123 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1006 14:36:18.073469  656123 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1006 14:36:18.073537  656123 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1006 14:36:18.073593  656123 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1006 14:36:18.073643  656123 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1006 14:36:18.073731  656123 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1006 14:36:18.073828  656123 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1006 14:36:18.073881  656123 kubeadm.go:318] [certs] Using the existing "sa" key
	I1006 14:36:18.073950  656123 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1006 14:36:18.358369  656123 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1006 14:36:18.660416  656123 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1006 14:36:18.904822  656123 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1006 14:36:19.181972  656123 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1006 14:36:19.419333  656123 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1006 14:36:19.419883  656123 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1006 14:36:19.422018  656123 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1006 14:36:19.424552  656123 out.go:252]   - Booting up control plane ...
	I1006 14:36:19.424633  656123 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1006 14:36:19.424695  656123 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1006 14:36:19.424766  656123 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1006 14:36:19.438773  656123 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1006 14:36:19.438935  656123 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1006 14:36:19.446167  656123 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1006 14:36:19.446370  656123 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1006 14:36:19.446407  656123 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1006 14:36:19.549636  656123 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1006 14:36:19.549773  656123 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1006 14:36:21.051643  656123 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.501975645s
	I1006 14:36:21.055540  656123 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1006 14:36:21.055642  656123 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	I1006 14:36:21.055761  656123 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1006 14:36:21.055838  656123 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1006 14:40:21.055953  656123 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000134857s
	I1006 14:40:21.056046  656123 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.00022136s
	I1006 14:40:21.056101  656123 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000206831s
	I1006 14:40:21.056104  656123 kubeadm.go:318] 
	I1006 14:40:21.056173  656123 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1006 14:40:21.056304  656123 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1006 14:40:21.056432  656123 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1006 14:40:21.056532  656123 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1006 14:40:21.056641  656123 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1006 14:40:21.056764  656123 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1006 14:40:21.056770  656123 kubeadm.go:318] 
	I1006 14:40:21.060023  656123 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1006 14:40:21.060145  656123 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1006 14:40:21.060722  656123 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8441/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline]
	I1006 14:40:21.060819  656123 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1006 14:40:21.060909  656123 kubeadm.go:402] duration metric: took 12m10.94114452s to StartCluster
	I1006 14:40:21.060976  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:40:21.061036  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:40:21.089107  656123 cri.go:89] found id: ""
	I1006 14:40:21.089130  656123 logs.go:282] 0 containers: []
	W1006 14:40:21.089137  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:40:21.089143  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:40:21.089218  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:40:21.116923  656123 cri.go:89] found id: ""
	I1006 14:40:21.116942  656123 logs.go:282] 0 containers: []
	W1006 14:40:21.116948  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:40:21.116954  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:40:21.117001  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:40:21.144161  656123 cri.go:89] found id: ""
	I1006 14:40:21.144196  656123 logs.go:282] 0 containers: []
	W1006 14:40:21.144219  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:40:21.144227  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:40:21.144287  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:40:21.173031  656123 cri.go:89] found id: ""
	I1006 14:40:21.173051  656123 logs.go:282] 0 containers: []
	W1006 14:40:21.173059  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:40:21.173065  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:40:21.173117  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:40:21.200194  656123 cri.go:89] found id: ""
	I1006 14:40:21.200232  656123 logs.go:282] 0 containers: []
	W1006 14:40:21.200242  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:40:21.200249  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:40:21.200313  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:40:21.227692  656123 cri.go:89] found id: ""
	I1006 14:40:21.227708  656123 logs.go:282] 0 containers: []
	W1006 14:40:21.227715  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:40:21.227720  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:40:21.227777  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:40:21.255803  656123 cri.go:89] found id: ""
	I1006 14:40:21.255827  656123 logs.go:282] 0 containers: []
	W1006 14:40:21.255836  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:40:21.255848  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:40:21.255863  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:40:21.269683  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:40:21.269708  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:40:21.330259  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:40:21.322987   15591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:40:21.323612   15591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:40:21.324719   15591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:40:21.325098   15591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:40:21.326635   15591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:40:21.322987   15591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:40:21.323612   15591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:40:21.324719   15591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:40:21.325098   15591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:40:21.326635   15591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 14:40:21.330282  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:40:21.330295  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:40:21.395010  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:40:21.395036  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:40:21.425956  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:40:21.425975  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1006 14:40:21.494244  656123 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.501975645s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000134857s
	[control-plane-check] kube-scheduler is not healthy after 4m0.00022136s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000206831s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8441/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline]
	To see the stack trace of this error execute with --v=5 or higher
	W1006 14:40:21.494415  656123 out.go:285] * 
	W1006 14:40:21.496145  656123 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
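Before filing the suggested issue, it is worth probing the three endpoints named in the control-plane-check lines directly from the node; a minimal sketch, again with an illustrative profile name and assuming curl is available in the node image:

  minikube ssh -p functional-000000 "curl -k https://192.168.49.2:8441/livez"
  minikube ssh -p functional-000000 "curl -k https://127.0.0.1:10259/livez"
  minikube ssh -p functional-000000 "curl -k https://127.0.0.1:10257/healthz"

In this run all three refuse connections, matching the kubeadm error below; if crictl ps -a shows an exited control-plane container, crictl logs CONTAINERID (as kubeadm suggests above) is the next place to look.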
	I1006 14:40:21.499891  656123 out.go:203] 
	W1006 14:40:21.500973  656123 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.501975645s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000134857s
	[control-plane-check] kube-scheduler is not healthy after 4m0.00022136s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000206831s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI.
	Here is one example of how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8441/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1006 14:40:21.500999  656123 out.go:285] * 
	I1006 14:40:21.502231  656123 out.go:203] 
	
	
	==> CRI-O <==
	Oct 06 14:40:14 functional-135520 crio[5849]: time="2025-10-06T14:40:14.002436576Z" level=info msg="createCtr: removing container d09a83215e7ba678a591274f52a3c4e3bbafe4f50c309bdbad0db08fd40f72ad" id=7954ab01-b4a1-4af8-864a-83bce242e907 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:40:14 functional-135520 crio[5849]: time="2025-10-06T14:40:14.002464878Z" level=info msg="createCtr: deleting container d09a83215e7ba678a591274f52a3c4e3bbafe4f50c309bdbad0db08fd40f72ad from storage" id=7954ab01-b4a1-4af8-864a-83bce242e907 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:40:14 functional-135520 crio[5849]: time="2025-10-06T14:40:14.004394482Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-functional-135520_kube-system_9c0f460a73b4e4a7087ce2a722c4cad4_0" id=7954ab01-b4a1-4af8-864a-83bce242e907 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:40:17 functional-135520 crio[5849]: time="2025-10-06T14:40:17.980597758Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=9307489c-7a13-4906-9ddf-5af7e3827d27 name=/runtime.v1.ImageService/ImageStatus
	Oct 06 14:40:17 functional-135520 crio[5849]: time="2025-10-06T14:40:17.981492601Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=a030c920-74cb-44f7-9d05-4afb02030a5a name=/runtime.v1.ImageService/ImageStatus
	Oct 06 14:40:17 functional-135520 crio[5849]: time="2025-10-06T14:40:17.982361324Z" level=info msg="Creating container: kube-system/etcd-functional-135520/etcd" id=8347184d-14c3-48dc-9459-ef660db6f6e3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:40:17 functional-135520 crio[5849]: time="2025-10-06T14:40:17.982590193Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 14:40:17 functional-135520 crio[5849]: time="2025-10-06T14:40:17.985847299Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 14:40:17 functional-135520 crio[5849]: time="2025-10-06T14:40:17.986311869Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 14:40:18 functional-135520 crio[5849]: time="2025-10-06T14:40:18.001227615Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=8347184d-14c3-48dc-9459-ef660db6f6e3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:40:18 functional-135520 crio[5849]: time="2025-10-06T14:40:18.00268739Z" level=info msg="createCtr: deleting container ID 33cdc58a0c490dce49db5b8cff183237a957ec9252749f4025e5d44a3011f822 from idIndex" id=8347184d-14c3-48dc-9459-ef660db6f6e3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:40:18 functional-135520 crio[5849]: time="2025-10-06T14:40:18.002729594Z" level=info msg="createCtr: removing container 33cdc58a0c490dce49db5b8cff183237a957ec9252749f4025e5d44a3011f822" id=8347184d-14c3-48dc-9459-ef660db6f6e3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:40:18 functional-135520 crio[5849]: time="2025-10-06T14:40:18.002765547Z" level=info msg="createCtr: deleting container 33cdc58a0c490dce49db5b8cff183237a957ec9252749f4025e5d44a3011f822 from storage" id=8347184d-14c3-48dc-9459-ef660db6f6e3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:40:18 functional-135520 crio[5849]: time="2025-10-06T14:40:18.004797529Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-functional-135520_kube-system_f24ebbe4b3fc964d32e35d345c0d3653_0" id=8347184d-14c3-48dc-9459-ef660db6f6e3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:40:20 functional-135520 crio[5849]: time="2025-10-06T14:40:20.979681419Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=2443f8a8-1b76-4132-aa6d-cfe7c76e178d name=/runtime.v1.ImageService/ImageStatus
	Oct 06 14:40:20 functional-135520 crio[5849]: time="2025-10-06T14:40:20.98042903Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=679e6e26-978c-44f1-a68d-da03ad309e01 name=/runtime.v1.ImageService/ImageStatus
	Oct 06 14:40:20 functional-135520 crio[5849]: time="2025-10-06T14:40:20.981333955Z" level=info msg="Creating container: kube-system/kube-scheduler-functional-135520/kube-scheduler" id=d9b13087-0047-423a-b1ba-8f1b16d6d4e9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:40:20 functional-135520 crio[5849]: time="2025-10-06T14:40:20.981653588Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 14:40:20 functional-135520 crio[5849]: time="2025-10-06T14:40:20.985602553Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 14:40:20 functional-135520 crio[5849]: time="2025-10-06T14:40:20.986026326Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 14:40:21 functional-135520 crio[5849]: time="2025-10-06T14:40:21.002198437Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=d9b13087-0047-423a-b1ba-8f1b16d6d4e9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:40:21 functional-135520 crio[5849]: time="2025-10-06T14:40:21.003805453Z" level=info msg="createCtr: deleting container ID a98ed8aedfa1ef039e7da182e75565487fd26606af94b95038816b9a7b11df7d from idIndex" id=d9b13087-0047-423a-b1ba-8f1b16d6d4e9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:40:21 functional-135520 crio[5849]: time="2025-10-06T14:40:21.00384221Z" level=info msg="createCtr: removing container a98ed8aedfa1ef039e7da182e75565487fd26606af94b95038816b9a7b11df7d" id=d9b13087-0047-423a-b1ba-8f1b16d6d4e9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:40:21 functional-135520 crio[5849]: time="2025-10-06T14:40:21.003874157Z" level=info msg="createCtr: deleting container a98ed8aedfa1ef039e7da182e75565487fd26606af94b95038816b9a7b11df7d from storage" id=d9b13087-0047-423a-b1ba-8f1b16d6d4e9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:40:21 functional-135520 crio[5849]: time="2025-10-06T14:40:21.006007213Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-functional-135520_kube-system_5115bd1eba9594a3f2b99b5d6a4b9d59_0" id=d9b13087-0047-423a-b1ba-8f1b16d6d4e9 name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:40:22.706309   15740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:40:22.706911   15740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:40:22.708496   15740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:40:22.708888   15740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:40:22.710550   15740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	
	
	==> kernel <==
	 14:40:22 up  5:22,  0 user,  load average: 0.00, 0.04, 0.24
	Linux functional-135520 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 06 14:40:14 functional-135520 kubelet[14966]:         container kube-apiserver start failed in pod kube-apiserver-functional-135520_kube-system(9c0f460a73b4e4a7087ce2a722c4cad4): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 06 14:40:14 functional-135520 kubelet[14966]:  > logger="UnhandledError"
	Oct 06 14:40:14 functional-135520 kubelet[14966]: E1006 14:40:14.004762   14966 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-functional-135520" podUID="9c0f460a73b4e4a7087ce2a722c4cad4"
	Oct 06 14:40:16 functional-135520 kubelet[14966]: E1006 14:40:16.019446   14966 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8441/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8441: connect: connection refused" event="&Event{ObjectMeta:{functional-135520.186beda7023a08f5  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-135520,UID:functional-135520,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node functional-135520 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:functional-135520,},FirstTimestamp:2025-10-06 14:36:20.970989813 +0000 UTC m=+1.419813170,LastTimestamp:2025-10-06 14:36:20.970989813 +0000 UTC m=+1.419813170,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-135520,}"
	Oct 06 14:40:17 functional-135520 kubelet[14966]: E1006 14:40:17.081734   14966 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	Oct 06 14:40:17 functional-135520 kubelet[14966]: E1006 14:40:17.600142   14966 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-135520?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="7s"
	Oct 06 14:40:17 functional-135520 kubelet[14966]: I1006 14:40:17.758034   14966 kubelet_node_status.go:75] "Attempting to register node" node="functional-135520"
	Oct 06 14:40:17 functional-135520 kubelet[14966]: E1006 14:40:17.758411   14966 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8441/api/v1/nodes\": dial tcp 192.168.49.2:8441: connect: connection refused" node="functional-135520"
	Oct 06 14:40:17 functional-135520 kubelet[14966]: E1006 14:40:17.980098   14966 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-135520\" not found" node="functional-135520"
	Oct 06 14:40:18 functional-135520 kubelet[14966]: E1006 14:40:18.005131   14966 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 06 14:40:18 functional-135520 kubelet[14966]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 06 14:40:18 functional-135520 kubelet[14966]:  > podSandboxID="91ab0a64f17ca953284929376780a86381ab6a8cae1f4af7da89790dc4c0e8df"
	Oct 06 14:40:18 functional-135520 kubelet[14966]: E1006 14:40:18.005270   14966 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 06 14:40:18 functional-135520 kubelet[14966]:         container etcd start failed in pod etcd-functional-135520_kube-system(f24ebbe4b3fc964d32e35d345c0d3653): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 06 14:40:18 functional-135520 kubelet[14966]:  > logger="UnhandledError"
	Oct 06 14:40:18 functional-135520 kubelet[14966]: E1006 14:40:18.005308   14966 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-functional-135520" podUID="f24ebbe4b3fc964d32e35d345c0d3653"
	Oct 06 14:40:20 functional-135520 kubelet[14966]: E1006 14:40:20.979281   14966 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-135520\" not found" node="functional-135520"
	Oct 06 14:40:20 functional-135520 kubelet[14966]: E1006 14:40:20.993487   14966 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-135520\" not found"
	Oct 06 14:40:21 functional-135520 kubelet[14966]: E1006 14:40:21.006289   14966 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 06 14:40:21 functional-135520 kubelet[14966]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 06 14:40:21 functional-135520 kubelet[14966]:  > podSandboxID="526b997044ad8cc54e45aef5a5faa2edaadc9cabbedd2784eaded2bd6562135f"
	Oct 06 14:40:21 functional-135520 kubelet[14966]: E1006 14:40:21.006389   14966 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 06 14:40:21 functional-135520 kubelet[14966]:         container kube-scheduler start failed in pod kube-scheduler-functional-135520_kube-system(5115bd1eba9594a3f2b99b5d6a4b9d59): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 06 14:40:21 functional-135520 kubelet[14966]:  > logger="UnhandledError"
	Oct 06 14:40:21 functional-135520 kubelet[14966]: E1006 14:40:21.006418   14966 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-functional-135520" podUID="5115bd1eba9594a3f2b99b5d6a4b9d59"
	

-- /stdout --
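Every control-plane container in the logs above (kube-apiserver, etcd, kube-scheduler) fails at create time with the same CRI-O error, "cannot open sd-bus: No such file or directory", which is why the health checks on 8441, 10257 and 10259 never succeed. A minimal triage sketch on the node, using the crictl invocations the kubeadm output itself recommends (the /etc/crio/ path below is an assumption about the stock CRI-O config location, not something shown in this log):

	# List every kube container CRI-O has attempted, including failed creates.
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# Fetch the logs of a failing container ID taken from the listing above.
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID
	# "cannot open sd-bus" typically means CRI-O is configured to manage cgroups
	# through systemd but cannot reach the systemd D-Bus socket; check the setting.
	sudo grep -R "cgroup_manager" /etc/crio/

If cgroup_manager is set to "systemd" while no systemd bus is reachable inside the node, every container create will keep failing in exactly this way.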
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-135520 -n functional-135520
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-135520 -n functional-135520: exit status 2 (308.447756ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-135520" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/serial/ExtraConfig (737.00s)

TestFunctional/serial/ComponentHealth (1.99s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-135520 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: (dbg) Non-zero exit: kubectl --context functional-135520 get po -l tier=control-plane -n kube-system -o=json: exit status 1 (54.775617ms)

** stderr ** 
	E1006 14:40:23.523933  669394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1006 14:40:23.524307  669394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1006 14:40:23.525813  669394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1006 14:40:23.526170  669394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1006 14:40:23.527745  669394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:827: failed to get components. args "kubectl --context functional-135520 get po -l tier=control-plane -n kube-system -o=json": exit status 1
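With the apiserver on 192.168.49.2:8441 refusing connections, any kubectl call against this context fails before the label selector is even evaluated. For reference, the component query this test runs can be narrowed with jsonpath to one line of name and phase per control-plane pod; a sketch, which requires a reachable apiserver that this profile does not currently have:

	kubectl --context functional-135520 get po -l tier=control-plane -n kube-system \
	  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.phase}{"\n"}{end}'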
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/serial/ComponentHealth]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/serial/ComponentHealth]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-135520
helpers_test.go:243: (dbg) docker inspect functional-135520:

-- stdout --
	[
	    {
	        "Id": "3dd9a226ea42760de316c3cd10c240a06a297d856d2072b1bb8c9de31097dd20",
	        "Created": "2025-10-06T14:13:32.283355011Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 644403,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-06T14:13:32.318096257Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/3dd9a226ea42760de316c3cd10c240a06a297d856d2072b1bb8c9de31097dd20/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3dd9a226ea42760de316c3cd10c240a06a297d856d2072b1bb8c9de31097dd20/hostname",
	        "HostsPath": "/var/lib/docker/containers/3dd9a226ea42760de316c3cd10c240a06a297d856d2072b1bb8c9de31097dd20/hosts",
	        "LogPath": "/var/lib/docker/containers/3dd9a226ea42760de316c3cd10c240a06a297d856d2072b1bb8c9de31097dd20/3dd9a226ea42760de316c3cd10c240a06a297d856d2072b1bb8c9de31097dd20-json.log",
	        "Name": "/functional-135520",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-135520:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-135520",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "3dd9a226ea42760de316c3cd10c240a06a297d856d2072b1bb8c9de31097dd20",
	                "LowerDir": "/var/lib/docker/overlay2/fc963905026931708302dacddcd89a9d41c6b02cea585cc1ff491aa62dc8d60a-init/diff:/var/lib/docker/overlay2/498c39ad2e273bbda04a4b230222b9767ea2da097b1fe98436168d26143cd080/diff",
	                "MergedDir": "/var/lib/docker/overlay2/fc963905026931708302dacddcd89a9d41c6b02cea585cc1ff491aa62dc8d60a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/fc963905026931708302dacddcd89a9d41c6b02cea585cc1ff491aa62dc8d60a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/fc963905026931708302dacddcd89a9d41c6b02cea585cc1ff491aa62dc8d60a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-135520",
	                "Source": "/var/lib/docker/volumes/functional-135520/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-135520",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-135520",
	                "name.minikube.sigs.k8s.io": "functional-135520",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6368ffca3e5840f94a34614c511d9f0a0a4ca0d05de4fe1f94c8bfdc332f1a62",
	            "SandboxKey": "/var/run/docker/netns/6368ffca3e58",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32878"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32879"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32882"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32880"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32881"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-135520": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ae:d1:94:25:38:1c",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f712be59dd18dac98bed5f234c9f77a39e85277143d6f46285adcd3b0185d552",
	                    "EndpointID": "b816964b653b1b5116e3262dfdc87af272931013ef5b9e2714c9ff7357118a6f",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-135520",
	                        "3dd9a226ea42"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
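The inspect output shows container port 8441 (the apiserver) published only on the host loopback at 127.0.0.1:32881. That allows a host-side probe that bypasses kubectl entirely; a sketch, where 32881 is the dynamically assigned HostPort from the Ports map above and -k is needed because the profile's apiserver certificate is self-signed:

	curl -sk --max-time 5 https://127.0.0.1:32881/livez; echo "exit=$?"

With the control plane down this is expected to fail with connection refused, matching the kubelet and kubectl errors earlier in the report.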
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-135520 -n functional-135520
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-135520 -n functional-135520: exit status 2 (304.155953ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
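The post-mortem queries {{.APIServer}} and {{.Host}} in separate invocations; minikube's status --format takes a Go template over the whole status struct, so the Running/Stopped divergence can be shown in one call. A sketch, assuming the .Kubelet field that minikube status also reports alongside the two fields used above:

	out/minikube-linux-amd64 status -p functional-135520 -n functional-135520 \
	  --format 'host={{.Host}} kubelet={{.Kubelet}} apiserver={{.APIServer}}'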
helpers_test.go:252: <<< TestFunctional/serial/ComponentHealth FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/serial/ComponentHealth]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-135520 logs -n 25
helpers_test.go:260: TestFunctional/serial/ComponentHealth logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                     ARGS                                                      │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ unpause │ nospam-500584 --log_dir /tmp/nospam-500584 unpause                                                            │ nospam-500584     │ jenkins │ v1.37.0 │ 06 Oct 25 14:13 UTC │ 06 Oct 25 14:13 UTC │
	│ unpause │ nospam-500584 --log_dir /tmp/nospam-500584 unpause                                                            │ nospam-500584     │ jenkins │ v1.37.0 │ 06 Oct 25 14:13 UTC │ 06 Oct 25 14:13 UTC │
	│ unpause │ nospam-500584 --log_dir /tmp/nospam-500584 unpause                                                            │ nospam-500584     │ jenkins │ v1.37.0 │ 06 Oct 25 14:13 UTC │ 06 Oct 25 14:13 UTC │
	│ stop    │ nospam-500584 --log_dir /tmp/nospam-500584 stop                                                               │ nospam-500584     │ jenkins │ v1.37.0 │ 06 Oct 25 14:13 UTC │ 06 Oct 25 14:13 UTC │
	│ stop    │ nospam-500584 --log_dir /tmp/nospam-500584 stop                                                               │ nospam-500584     │ jenkins │ v1.37.0 │ 06 Oct 25 14:13 UTC │ 06 Oct 25 14:13 UTC │
	│ stop    │ nospam-500584 --log_dir /tmp/nospam-500584 stop                                                               │ nospam-500584     │ jenkins │ v1.37.0 │ 06 Oct 25 14:13 UTC │ 06 Oct 25 14:13 UTC │
	│ delete  │ -p nospam-500584                                                                                              │ nospam-500584     │ jenkins │ v1.37.0 │ 06 Oct 25 14:13 UTC │ 06 Oct 25 14:13 UTC │
	│ start   │ -p functional-135520 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:13 UTC │                     │
	│ start   │ -p functional-135520 --alsologtostderr -v=8                                                                   │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:21 UTC │                     │
	│ cache   │ functional-135520 cache add registry.k8s.io/pause:3.1                                                         │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:27 UTC │ 06 Oct 25 14:27 UTC │
	│ cache   │ functional-135520 cache add registry.k8s.io/pause:3.3                                                         │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:27 UTC │ 06 Oct 25 14:27 UTC │
	│ cache   │ functional-135520 cache add registry.k8s.io/pause:latest                                                      │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:27 UTC │ 06 Oct 25 14:27 UTC │
	│ cache   │ functional-135520 cache add minikube-local-cache-test:functional-135520                                       │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:27 UTC │ 06 Oct 25 14:28 UTC │
	│ cache   │ functional-135520 cache delete minikube-local-cache-test:functional-135520                                    │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:28 UTC │ 06 Oct 25 14:28 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                              │ minikube          │ jenkins │ v1.37.0 │ 06 Oct 25 14:28 UTC │ 06 Oct 25 14:28 UTC │
	│ cache   │ list                                                                                                          │ minikube          │ jenkins │ v1.37.0 │ 06 Oct 25 14:28 UTC │ 06 Oct 25 14:28 UTC │
	│ ssh     │ functional-135520 ssh sudo crictl images                                                                      │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:28 UTC │ 06 Oct 25 14:28 UTC │
	│ ssh     │ functional-135520 ssh sudo crictl rmi registry.k8s.io/pause:latest                                            │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:28 UTC │ 06 Oct 25 14:28 UTC │
	│ ssh     │ functional-135520 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                       │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:28 UTC │                     │
	│ cache   │ functional-135520 cache reload                                                                                │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:28 UTC │ 06 Oct 25 14:28 UTC │
	│ ssh     │ functional-135520 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                       │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:28 UTC │ 06 Oct 25 14:28 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                              │ minikube          │ jenkins │ v1.37.0 │ 06 Oct 25 14:28 UTC │ 06 Oct 25 14:28 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                           │ minikube          │ jenkins │ v1.37.0 │ 06 Oct 25 14:28 UTC │ 06 Oct 25 14:28 UTC │
	│ kubectl │ functional-135520 kubectl -- --context functional-135520 get pods                                             │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:28 UTC │                     │
	│ start   │ -p functional-135520 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all      │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:28 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/06 14:28:06
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1006 14:28:06.515575  656123 out.go:360] Setting OutFile to fd 1 ...
	I1006 14:28:06.515775  656123 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 14:28:06.515777  656123 out.go:374] Setting ErrFile to fd 2...
	I1006 14:28:06.515780  656123 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 14:28:06.515998  656123 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-626179/.minikube/bin
	I1006 14:28:06.516461  656123 out.go:368] Setting JSON to false
	I1006 14:28:06.517416  656123 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":18622,"bootTime":1759742264,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1006 14:28:06.517495  656123 start.go:140] virtualization: kvm guest
	I1006 14:28:06.519514  656123 out.go:179] * [functional-135520] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1006 14:28:06.520800  656123 out.go:179]   - MINIKUBE_LOCATION=21701
	I1006 14:28:06.520851  656123 notify.go:220] Checking for updates...
	I1006 14:28:06.523025  656123 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1006 14:28:06.524163  656123 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21701-626179/kubeconfig
	I1006 14:28:06.525184  656123 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21701-626179/.minikube
	I1006 14:28:06.526184  656123 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1006 14:28:06.527199  656123 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1006 14:28:06.528788  656123 config.go:182] Loaded profile config "functional-135520": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 14:28:06.528884  656123 driver.go:421] Setting default libvirt URI to qemu:///system
	I1006 14:28:06.553892  656123 docker.go:123] docker version: linux-28.5.0:Docker Engine - Community
	I1006 14:28:06.554005  656123 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 14:28:06.610913  656123 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:false NGoroutines:58 SystemTime:2025-10-06 14:28:06.599957285 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1006 14:28:06.611014  656123 docker.go:318] overlay module found
	I1006 14:28:06.612730  656123 out.go:179] * Using the docker driver based on existing profile
	I1006 14:28:06.613792  656123 start.go:304] selected driver: docker
	I1006 14:28:06.613801  656123 start.go:924] validating driver "docker" against &{Name:functional-135520 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-135520 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 14:28:06.613876  656123 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1006 14:28:06.613960  656123 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 14:28:06.672658  656123 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:false NGoroutines:58 SystemTime:2025-10-06 14:28:06.663055015 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1006 14:28:06.673343  656123 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1006 14:28:06.673382  656123 cni.go:84] Creating CNI manager for ""
	I1006 14:28:06.673449  656123 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1006 14:28:06.673491  656123 start.go:348] cluster config:
	{Name:functional-135520 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-135520 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 14:28:06.675542  656123 out.go:179] * Starting "functional-135520" primary control-plane node in "functional-135520" cluster
	I1006 14:28:06.676765  656123 cache.go:123] Beginning downloading kic base image for docker with crio
	I1006 14:28:06.678012  656123 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1006 14:28:06.679109  656123 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1006 14:28:06.679148  656123 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21701-626179/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1006 14:28:06.679171  656123 cache.go:58] Caching tarball of preloaded images
	I1006 14:28:06.679229  656123 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1006 14:28:06.679315  656123 preload.go:233] Found /home/jenkins/minikube-integration/21701-626179/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1006 14:28:06.679322  656123 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1006 14:28:06.679424  656123 profile.go:143] Saving config to /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/config.json ...
	I1006 14:28:06.701440  656123 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1006 14:28:06.701451  656123 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1006 14:28:06.701470  656123 cache.go:232] Successfully downloaded all kic artifacts
	I1006 14:28:06.701500  656123 start.go:360] acquireMachinesLock for functional-135520: {Name:mk634323c4619e77647ac9d9aaca492e399526ae Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1006 14:28:06.701582  656123 start.go:364] duration metric: took 55.883µs to acquireMachinesLock for "functional-135520"
	I1006 14:28:06.701608  656123 start.go:96] Skipping create...Using existing machine configuration
	I1006 14:28:06.701614  656123 fix.go:54] fixHost starting: 
	I1006 14:28:06.701815  656123 cli_runner.go:164] Run: docker container inspect functional-135520 --format={{.State.Status}}
	I1006 14:28:06.719582  656123 fix.go:112] recreateIfNeeded on functional-135520: state=Running err=<nil>
	W1006 14:28:06.719608  656123 fix.go:138] unexpected machine state, will restart: <nil>
	I1006 14:28:06.721479  656123 out.go:252] * Updating the running docker "functional-135520" container ...
	I1006 14:28:06.721509  656123 machine.go:93] provisionDockerMachine start ...
	I1006 14:28:06.721596  656123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-135520
	I1006 14:28:06.739776  656123 main.go:141] libmachine: Using SSH client type: native
	I1006 14:28:06.740016  656123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32878 <nil> <nil>}
	I1006 14:28:06.740022  656123 main.go:141] libmachine: About to run SSH command:
	hostname
	I1006 14:28:06.883328  656123 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-135520
	
	I1006 14:28:06.883355  656123 ubuntu.go:182] provisioning hostname "functional-135520"
	I1006 14:28:06.883416  656123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-135520
	I1006 14:28:06.901008  656123 main.go:141] libmachine: Using SSH client type: native
	I1006 14:28:06.901274  656123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32878 <nil> <nil>}
	I1006 14:28:06.901282  656123 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-135520 && echo "functional-135520" | sudo tee /etc/hostname
	I1006 14:28:07.054829  656123 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-135520
	
	I1006 14:28:07.054893  656123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-135520
	I1006 14:28:07.073103  656123 main.go:141] libmachine: Using SSH client type: native
	I1006 14:28:07.073400  656123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32878 <nil> <nil>}
	I1006 14:28:07.073412  656123 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-135520' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-135520/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-135520' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1006 14:28:07.218044  656123 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1006 14:28:07.218064  656123 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21701-626179/.minikube CaCertPath:/home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21701-626179/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21701-626179/.minikube}
	I1006 14:28:07.218086  656123 ubuntu.go:190] setting up certificates
	I1006 14:28:07.218097  656123 provision.go:84] configureAuth start
	I1006 14:28:07.218147  656123 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-135520
	I1006 14:28:07.235320  656123 provision.go:143] copyHostCerts
	I1006 14:28:07.235375  656123 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-626179/.minikube/ca.pem, removing ...
	I1006 14:28:07.235390  656123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-626179/.minikube/ca.pem
	I1006 14:28:07.235462  656123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21701-626179/.minikube/ca.pem (1082 bytes)
	I1006 14:28:07.235557  656123 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-626179/.minikube/cert.pem, removing ...
	I1006 14:28:07.235561  656123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-626179/.minikube/cert.pem
	I1006 14:28:07.235585  656123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21701-626179/.minikube/cert.pem (1123 bytes)
	I1006 14:28:07.235653  656123 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-626179/.minikube/key.pem, removing ...
	I1006 14:28:07.235656  656123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-626179/.minikube/key.pem
	I1006 14:28:07.235685  656123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21701-626179/.minikube/key.pem (1679 bytes)
	I1006 14:28:07.235742  656123 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21701-626179/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca-key.pem org=jenkins.functional-135520 san=[127.0.0.1 192.168.49.2 functional-135520 localhost minikube]
	I1006 14:28:07.452963  656123 provision.go:177] copyRemoteCerts
	I1006 14:28:07.453021  656123 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1006 14:28:07.453058  656123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-135520
	I1006 14:28:07.470979  656123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32878 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/functional-135520/id_rsa Username:docker}
	I1006 14:28:07.572166  656123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1006 14:28:07.589268  656123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1006 14:28:07.606864  656123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
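Note: the server certificate generated above carries SANs for 127.0.0.1, 192.168.49.2, functional-135520, localhost and minikube, and is copied to /etc/docker/server.pem inside the node. A hedged sketch for confirming which SANs actually landed in the copied cert, reusing the SSH port and key path shown in this log (the -ext option needs a reasonably recent openssl, which Debian 12 ships):
	# Print the subjectAltName extension of the provisioned server cert.
	ssh -i /home/jenkins/minikube-integration/21701-626179/.minikube/machines/functional-135520/id_rsa \
	    -p 32878 docker@127.0.0.1 \
	    "sudo openssl x509 -in /etc/docker/server.pem -noout -ext subjectAltName"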
	I1006 14:28:07.624012  656123 provision.go:87] duration metric: took 405.903097ms to configureAuth
	I1006 14:28:07.624031  656123 ubuntu.go:206] setting minikube options for container-runtime
	I1006 14:28:07.624198  656123 config.go:182] Loaded profile config "functional-135520": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 14:28:07.624358  656123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-135520
	I1006 14:28:07.642129  656123 main.go:141] libmachine: Using SSH client type: native
	I1006 14:28:07.642348  656123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32878 <nil> <nil>}
	I1006 14:28:07.642358  656123 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1006 14:28:07.930562  656123 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1006 14:28:07.930579  656123 machine.go:96] duration metric: took 1.209063221s to provisionDockerMachine
	I1006 14:28:07.930589  656123 start.go:293] postStartSetup for "functional-135520" (driver="docker")
	I1006 14:28:07.930598  656123 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1006 14:28:07.930651  656123 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1006 14:28:07.930687  656123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-135520
	I1006 14:28:07.948006  656123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32878 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/functional-135520/id_rsa Username:docker}
	I1006 14:28:08.049510  656123 ssh_runner.go:195] Run: cat /etc/os-release
	I1006 14:28:08.053027  656123 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1006 14:28:08.053042  656123 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1006 14:28:08.053061  656123 filesync.go:126] Scanning /home/jenkins/minikube-integration/21701-626179/.minikube/addons for local assets ...
	I1006 14:28:08.053110  656123 filesync.go:126] Scanning /home/jenkins/minikube-integration/21701-626179/.minikube/files for local assets ...
	I1006 14:28:08.053177  656123 filesync.go:149] local asset: /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem -> 6297192.pem in /etc/ssl/certs
	I1006 14:28:08.053267  656123 filesync.go:149] local asset: /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/test/nested/copy/629719/hosts -> hosts in /etc/test/nested/copy/629719
	I1006 14:28:08.053298  656123 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/629719
	I1006 14:28:08.060796  656123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem --> /etc/ssl/certs/6297192.pem (1708 bytes)
	I1006 14:28:08.077999  656123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/test/nested/copy/629719/hosts --> /etc/test/nested/copy/629719/hosts (40 bytes)
	I1006 14:28:08.094766  656123 start.go:296] duration metric: took 164.165544ms for postStartSetup
	I1006 14:28:08.094821  656123 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1006 14:28:08.094852  656123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-135520
	I1006 14:28:08.112238  656123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32878 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/functional-135520/id_rsa Username:docker}
	I1006 14:28:08.210200  656123 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1006 14:28:08.214744  656123 fix.go:56] duration metric: took 1.513121746s for fixHost
	I1006 14:28:08.214763  656123 start.go:83] releasing machines lock for "functional-135520", held for 1.513172056s
	I1006 14:28:08.214831  656123 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-135520
	I1006 14:28:08.231996  656123 ssh_runner.go:195] Run: cat /version.json
	I1006 14:28:08.232006  656123 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1006 14:28:08.232033  656123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-135520
	I1006 14:28:08.232059  656123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-135520
	I1006 14:28:08.250015  656123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32878 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/functional-135520/id_rsa Username:docker}
	I1006 14:28:08.250313  656123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32878 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/functional-135520/id_rsa Username:docker}
	I1006 14:28:08.415268  656123 ssh_runner.go:195] Run: systemctl --version
	I1006 14:28:08.422068  656123 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1006 14:28:08.458421  656123 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1006 14:28:08.463104  656123 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1006 14:28:08.463164  656123 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1006 14:28:08.471006  656123 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1006 14:28:08.471018  656123 start.go:495] detecting cgroup driver to use...
	I1006 14:28:08.471045  656123 detect.go:190] detected "systemd" cgroup driver on host os
	I1006 14:28:08.471088  656123 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1006 14:28:08.485271  656123 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1006 14:28:08.496859  656123 docker.go:218] disabling cri-docker service (if available) ...
	I1006 14:28:08.496895  656123 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1006 14:28:08.510507  656123 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1006 14:28:08.522301  656123 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1006 14:28:08.600902  656123 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1006 14:28:08.681762  656123 docker.go:234] disabling docker service ...
	I1006 14:28:08.681827  656123 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1006 14:28:08.696663  656123 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1006 14:28:08.708614  656123 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1006 14:28:08.788151  656123 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1006 14:28:08.872163  656123 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1006 14:28:08.884753  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1006 14:28:08.898897  656123 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1006 14:28:08.898940  656123 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:28:08.907545  656123 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1006 14:28:08.907597  656123 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:28:08.916027  656123 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:28:08.924428  656123 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:28:08.932498  656123 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1006 14:28:08.939984  656123 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:28:08.948324  656123 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:28:08.956705  656123 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:28:08.964969  656123 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1006 14:28:08.971804  656123 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1006 14:28:08.978693  656123 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 14:28:09.061389  656123 ssh_runner.go:195] Run: sudo systemctl restart crio
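Note: the sed edits above pin the pause image, switch the cgroup manager to systemd, move conmon into the pod cgroup, and open unprivileged ports via default_sysctls before cri-o is restarted. A quick sketch for confirming the merged result (crio config prints the effective configuration, and this log itself runs that command later):
	# Show the settings touched by the sed edits in the effective config.
	sudo crio config 2>/dev/null | grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start'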
	I1006 14:28:09.170335  656123 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1006 14:28:09.170401  656123 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1006 14:28:09.174497  656123 start.go:563] Will wait 60s for crictl version
	I1006 14:28:09.174546  656123 ssh_runner.go:195] Run: which crictl
	I1006 14:28:09.177947  656123 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1006 14:28:09.201915  656123 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1006 14:28:09.201972  656123 ssh_runner.go:195] Run: crio --version
	I1006 14:28:09.230589  656123 ssh_runner.go:195] Run: crio --version
	I1006 14:28:09.260606  656123 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1006 14:28:09.261947  656123 cli_runner.go:164] Run: docker network inspect functional-135520 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1006 14:28:09.278672  656123 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1006 14:28:09.284367  656123 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1006 14:28:09.285382  656123 kubeadm.go:883] updating cluster {Name:functional-135520 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-135520 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1006 14:28:09.285546  656123 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1006 14:28:09.285603  656123 ssh_runner.go:195] Run: sudo crictl images --output json
	I1006 14:28:09.318027  656123 crio.go:514] all images are preloaded for cri-o runtime.
	I1006 14:28:09.318039  656123 crio.go:433] Images already preloaded, skipping extraction
	I1006 14:28:09.318088  656123 ssh_runner.go:195] Run: sudo crictl images --output json
	I1006 14:28:09.342904  656123 crio.go:514] all images are preloaded for cri-o runtime.
	I1006 14:28:09.342917  656123 cache_images.go:85] Images are preloaded, skipping loading
	I1006 14:28:09.342923  656123 kubeadm.go:934] updating node { 192.168.49.2 8441 v1.34.1 crio true true} ...
	I1006 14:28:09.343012  656123 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-135520 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:functional-135520 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1006 14:28:09.343066  656123 ssh_runner.go:195] Run: crio config
	I1006 14:28:09.388889  656123 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1006 14:28:09.388909  656123 cni.go:84] Creating CNI manager for ""
	I1006 14:28:09.388921  656123 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1006 14:28:09.388932  656123 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1006 14:28:09.388955  656123 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-135520 NodeName:functional-135520 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1006 14:28:09.389087  656123 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-135520"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
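	Note: the YAML above is rendered to /var/tmp/minikube/kubeadm.yaml.new (2063 bytes per the scp line below) before being applied. As a sketch, recent kubeadm releases can sanity-check such a file against the v1beta4 API types ahead of time; the binary path is taken from the binaries directory shown in this log:
	# Validate the rendered kubeadm config against kubeadm's API schema.
	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new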
	
	I1006 14:28:09.389140  656123 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1006 14:28:09.397400  656123 binaries.go:44] Found k8s binaries, skipping transfer
	I1006 14:28:09.397454  656123 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1006 14:28:09.404846  656123 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1006 14:28:09.416672  656123 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1006 14:28:09.428910  656123 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2063 bytes)
	I1006 14:28:09.440961  656123 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1006 14:28:09.444714  656123 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 14:28:09.533656  656123 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1006 14:28:09.546185  656123 certs.go:69] Setting up /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520 for IP: 192.168.49.2
	I1006 14:28:09.546197  656123 certs.go:195] generating shared ca certs ...
	I1006 14:28:09.546290  656123 certs.go:227] acquiring lock for ca certs: {Name:mka0cc25cb6a953e937aa825fc55167759271aaf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:28:09.546440  656123 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21701-626179/.minikube/ca.key
	I1006 14:28:09.546475  656123 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21701-626179/.minikube/proxy-client-ca.key
	I1006 14:28:09.546482  656123 certs.go:257] generating profile certs ...
	I1006 14:28:09.546559  656123 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/client.key
	I1006 14:28:09.546594  656123 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/apiserver.key.72a46e8e
	I1006 14:28:09.546623  656123 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/proxy-client.key
	I1006 14:28:09.546728  656123 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/629719.pem (1338 bytes)
	W1006 14:28:09.546750  656123 certs.go:480] ignoring /home/jenkins/minikube-integration/21701-626179/.minikube/certs/629719_empty.pem, impossibly tiny 0 bytes
	I1006 14:28:09.546756  656123 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca-key.pem (1675 bytes)
	I1006 14:28:09.546775  656123 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem (1082 bytes)
	I1006 14:28:09.546793  656123 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/cert.pem (1123 bytes)
	I1006 14:28:09.546809  656123 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/key.pem (1679 bytes)
	I1006 14:28:09.546841  656123 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem (1708 bytes)
	I1006 14:28:09.547453  656123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1006 14:28:09.564638  656123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1006 14:28:09.581181  656123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1006 14:28:09.597600  656123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1006 14:28:09.614361  656123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1006 14:28:09.630631  656123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1006 14:28:09.647147  656123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1006 14:28:09.663361  656123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1006 14:28:09.679821  656123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem --> /usr/share/ca-certificates/6297192.pem (1708 bytes)
	I1006 14:28:09.696763  656123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1006 14:28:09.713335  656123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/certs/629719.pem --> /usr/share/ca-certificates/629719.pem (1338 bytes)
	I1006 14:28:09.729791  656123 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1006 14:28:09.741445  656123 ssh_runner.go:195] Run: openssl version
	I1006 14:28:09.747314  656123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1006 14:28:09.755183  656123 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1006 14:28:09.758724  656123 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  6 13:56 /usr/share/ca-certificates/minikubeCA.pem
	I1006 14:28:09.758757  656123 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1006 14:28:09.792226  656123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1006 14:28:09.799947  656123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/629719.pem && ln -fs /usr/share/ca-certificates/629719.pem /etc/ssl/certs/629719.pem"
	I1006 14:28:09.808163  656123 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/629719.pem
	I1006 14:28:09.811711  656123 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  6 14:13 /usr/share/ca-certificates/629719.pem
	I1006 14:28:09.811747  656123 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/629719.pem
	I1006 14:28:09.845740  656123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/629719.pem /etc/ssl/certs/51391683.0"
	I1006 14:28:09.854138  656123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6297192.pem && ln -fs /usr/share/ca-certificates/6297192.pem /etc/ssl/certs/6297192.pem"
	I1006 14:28:09.862651  656123 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6297192.pem
	I1006 14:28:09.866319  656123 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  6 14:13 /usr/share/ca-certificates/6297192.pem
	I1006 14:28:09.866364  656123 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6297192.pem
	I1006 14:28:09.900583  656123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6297192.pem /etc/ssl/certs/3ec20f2e.0"
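Note: the ln -fs commands above name each trust-store symlink after the certificate's OpenSSL subject hash, which is what the preceding openssl x509 -hash -noout calls compute (b5213941.0, 51391683.0, 3ec20f2e.0). The same linking step for an arbitrary CA file, as a sketch:
	# Link a CA cert into /etc/ssl/certs under its subject hash.
	CERT=/usr/share/ca-certificates/minikubeCA.pem
	HASH=$(openssl x509 -hash -noout -in "$CERT")
	sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"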
	I1006 14:28:09.908997  656123 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1006 14:28:09.912812  656123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1006 14:28:09.946819  656123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1006 14:28:09.981139  656123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1006 14:28:10.015748  656123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1006 14:28:10.049705  656123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1006 14:28:10.084715  656123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
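Note: each -checkend 86400 call above exits non-zero when the certificate expires within 86400 seconds (24 hours), which is the signal to regenerate it. Checking one cert by hand looks like this:
	# Exit 0 if the cert is still valid 24h from now, non-zero otherwise.
	sudo openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 \
	  && echo "valid for at least 24h" || echo "expires within 24h"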
	I1006 14:28:10.119782  656123 kubeadm.go:400] StartCluster: {Name:functional-135520 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-135520 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 14:28:10.119890  656123 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1006 14:28:10.119973  656123 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1006 14:28:10.149719  656123 cri.go:89] found id: ""
	I1006 14:28:10.149774  656123 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1006 14:28:10.158129  656123 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1006 14:28:10.158143  656123 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1006 14:28:10.158217  656123 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1006 14:28:10.166324  656123 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1006 14:28:10.166847  656123 kubeconfig.go:125] found "functional-135520" server: "https://192.168.49.2:8441"
	I1006 14:28:10.168240  656123 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1006 14:28:10.175929  656123 kubeadm.go:644] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-10-06 14:13:37.047601698 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-10-06 14:28:09.438461717 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
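Note: drift detection here is a plain unified diff between the last-applied kubeadm config and the freshly rendered one; any difference (in this run, the enable-admission-plugins value) triggers a control-plane reconfigure. A minimal sketch of the decision, using the exact paths from the log:
	# Reconfigure only when the rendered config differs from the applied one.
	if ! sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new; then
	  echo "kubeadm config drift detected; reconfiguring cluster"
	fi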
	I1006 14:28:10.175939  656123 kubeadm.go:1160] stopping kube-system containers ...
	I1006 14:28:10.175953  656123 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1006 14:28:10.175996  656123 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1006 14:28:10.204289  656123 cri.go:89] found id: ""
	I1006 14:28:10.204358  656123 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1006 14:28:10.246949  656123 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1006 14:28:10.255460  656123 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5635 Oct  6 14:17 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5640 Oct  6 14:17 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5676 Oct  6 14:17 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Oct  6 14:17 /etc/kubernetes/scheduler.conf
	
	I1006 14:28:10.255526  656123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1006 14:28:10.263528  656123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1006 14:28:10.271432  656123 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1006 14:28:10.271482  656123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1006 14:28:10.278844  656123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1006 14:28:10.286462  656123 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1006 14:28:10.286516  656123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1006 14:28:10.293960  656123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1006 14:28:10.301358  656123 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1006 14:28:10.301414  656123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1006 14:28:10.308882  656123 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1006 14:28:10.316879  656123 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1006 14:28:10.360770  656123 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1006 14:28:12.195064  656123 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.834266287s)
	I1006 14:28:12.195115  656123 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1006 14:28:12.367120  656123 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1006 14:28:12.417483  656123 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1006 14:28:12.470408  656123 api_server.go:52] waiting for apiserver process to appear ...
	I1006 14:28:12.470467  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:12.971496  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	[the same pgrep poll repeats roughly every 500ms from 14:28:12 onward, never finding a kube-apiserver process]
	I1006 14:29:11.970687  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
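Note: the wait above polls pgrep for a kube-apiserver process twice per second (the overall timeout is not shown in this excerpt). The loop is equivalent to this sketch:
	# Poll for the apiserver process every 500ms, here capped at ~60s.
	for _ in $(seq 1 120); do
	  sudo pgrep -xnf 'kube-apiserver.*minikube.*' && break
	  sleep 0.5
	done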
	I1006 14:29:12.471591  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:29:12.471676  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:29:12.498988  656123 cri.go:89] found id: ""
	I1006 14:29:12.499014  656123 logs.go:282] 0 containers: []
	W1006 14:29:12.499021  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:29:12.499026  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:29:12.499080  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:29:12.526057  656123 cri.go:89] found id: ""
	I1006 14:29:12.526074  656123 logs.go:282] 0 containers: []
	W1006 14:29:12.526080  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:29:12.526085  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:29:12.526164  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:29:12.553395  656123 cri.go:89] found id: ""
	I1006 14:29:12.553415  656123 logs.go:282] 0 containers: []
	W1006 14:29:12.553426  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:29:12.553433  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:29:12.553486  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:29:12.580815  656123 cri.go:89] found id: ""
	I1006 14:29:12.580836  656123 logs.go:282] 0 containers: []
	W1006 14:29:12.580846  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:29:12.580870  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:29:12.580931  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:29:12.607516  656123 cri.go:89] found id: ""
	I1006 14:29:12.607533  656123 logs.go:282] 0 containers: []
	W1006 14:29:12.607539  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:29:12.607544  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:29:12.607607  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:29:12.634248  656123 cri.go:89] found id: ""
	I1006 14:29:12.634265  656123 logs.go:282] 0 containers: []
	W1006 14:29:12.634272  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:29:12.634279  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:29:12.634335  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:29:12.660860  656123 cri.go:89] found id: ""
	I1006 14:29:12.660876  656123 logs.go:282] 0 containers: []
	W1006 14:29:12.660883  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:29:12.660893  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:29:12.660905  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:29:12.731400  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:29:12.731425  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:29:12.745150  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:29:12.745174  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:29:12.803068  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:29:12.795122    6708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:12.795709    6708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:12.797425    6708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:12.797887    6708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:12.799415    6708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:29:12.795122    6708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:12.795709    6708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:12.797425    6708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:12.797887    6708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:12.799415    6708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 14:29:12.803085  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:29:12.803098  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:29:12.870066  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:29:12.870091  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
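Once the wait fails, each diagnostic cycle asks CRI-O for containers of every control-plane component in turn; the empty `found id: ""` / `0 containers` results above mean none were ever created. The equivalent shell loop, with the component list taken directly from the log:

    # crictl ps -a --quiet prints only container IDs (all states, -a);
    # an empty result is what the log reports as "0 containers".
    for name in kube-apiserver etcd coredns kube-scheduler \
                kube-proxy kube-controller-manager kindnet; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      [ -z "$ids" ] && echo "no container matching \"$name\"" || echo "$name: $ids"
    done
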
	I1006 14:29:15.401709  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:29:15.412675  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:29:15.412725  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:29:15.438239  656123 cri.go:89] found id: ""
	I1006 14:29:15.438255  656123 logs.go:282] 0 containers: []
	W1006 14:29:15.438264  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:29:15.438270  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:29:15.438322  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:29:15.463684  656123 cri.go:89] found id: ""
	I1006 14:29:15.463701  656123 logs.go:282] 0 containers: []
	W1006 14:29:15.463709  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:29:15.463715  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:29:15.463769  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:29:15.488259  656123 cri.go:89] found id: ""
	I1006 14:29:15.488276  656123 logs.go:282] 0 containers: []
	W1006 14:29:15.488284  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:29:15.488289  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:29:15.488347  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:29:15.514676  656123 cri.go:89] found id: ""
	I1006 14:29:15.514692  656123 logs.go:282] 0 containers: []
	W1006 14:29:15.514699  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:29:15.514704  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:29:15.514762  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:29:15.540755  656123 cri.go:89] found id: ""
	I1006 14:29:15.540770  656123 logs.go:282] 0 containers: []
	W1006 14:29:15.540776  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:29:15.540781  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:29:15.540832  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:29:15.565570  656123 cri.go:89] found id: ""
	I1006 14:29:15.565588  656123 logs.go:282] 0 containers: []
	W1006 14:29:15.565598  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:29:15.565604  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:29:15.565651  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:29:15.591845  656123 cri.go:89] found id: ""
	I1006 14:29:15.591860  656123 logs.go:282] 0 containers: []
	W1006 14:29:15.591876  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:29:15.591885  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:29:15.591895  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:29:15.605051  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:29:15.605069  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:29:15.662500  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:29:15.655240    6822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:15.655743    6822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:15.657283    6822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:15.657783    6822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:15.659338    6822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:29:15.655240    6822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:15.655743    6822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:15.657283    6822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:15.657783    6822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:15.659338    6822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 14:29:15.662517  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:29:15.662531  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:29:15.727404  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:29:15.727424  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:29:15.756261  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:29:15.756279  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
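The gathering steps themselves are bounded reads: the last 400 journal lines per unit, plus kernel messages at warning level and above. Consolidated, the three commands are as below (dmesg's -P disables the pager that -H, human-readable output, would otherwise start, and -L=never turns color off):

    sudo journalctl -u kubelet -n 400
    sudo journalctl -u crio -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
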
	I1006 14:29:18.330899  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:29:18.342312  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:29:18.342369  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:29:18.367886  656123 cri.go:89] found id: ""
	I1006 14:29:18.367902  656123 logs.go:282] 0 containers: []
	W1006 14:29:18.367912  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:29:18.367919  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:29:18.367967  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:29:18.394659  656123 cri.go:89] found id: ""
	I1006 14:29:18.394676  656123 logs.go:282] 0 containers: []
	W1006 14:29:18.394685  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:29:18.394691  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:29:18.394752  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:29:18.420739  656123 cri.go:89] found id: ""
	I1006 14:29:18.420762  656123 logs.go:282] 0 containers: []
	W1006 14:29:18.420773  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:29:18.420780  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:29:18.420844  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:29:18.446534  656123 cri.go:89] found id: ""
	I1006 14:29:18.446553  656123 logs.go:282] 0 containers: []
	W1006 14:29:18.446560  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:29:18.446565  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:29:18.446610  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:29:18.474847  656123 cri.go:89] found id: ""
	I1006 14:29:18.474867  656123 logs.go:282] 0 containers: []
	W1006 14:29:18.474876  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:29:18.474882  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:29:18.474940  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:29:18.500739  656123 cri.go:89] found id: ""
	I1006 14:29:18.500755  656123 logs.go:282] 0 containers: []
	W1006 14:29:18.500762  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:29:18.500767  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:29:18.500817  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:29:18.526704  656123 cri.go:89] found id: ""
	I1006 14:29:18.526720  656123 logs.go:282] 0 containers: []
	W1006 14:29:18.526726  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:29:18.526735  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:29:18.526749  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:29:18.594578  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:29:18.594601  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:29:18.608090  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:29:18.608110  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:29:18.665980  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:29:18.658366    6961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:18.658897    6961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:18.660516    6961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:18.660915    6961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:18.662586    6961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:29:18.658366    6961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:18.658897    6961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:18.660516    6961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:18.660915    6961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:18.662586    6961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 14:29:18.665999  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:29:18.666015  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:29:18.726769  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:29:18.726792  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:29:21.257561  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:29:21.269556  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:29:21.269611  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:29:21.295967  656123 cri.go:89] found id: ""
	I1006 14:29:21.295989  656123 logs.go:282] 0 containers: []
	W1006 14:29:21.296000  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:29:21.296007  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:29:21.296062  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:29:21.323201  656123 cri.go:89] found id: ""
	I1006 14:29:21.323232  656123 logs.go:282] 0 containers: []
	W1006 14:29:21.323240  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:29:21.323246  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:29:21.323297  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:29:21.352254  656123 cri.go:89] found id: ""
	I1006 14:29:21.352271  656123 logs.go:282] 0 containers: []
	W1006 14:29:21.352277  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:29:21.352282  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:29:21.352343  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:29:21.380457  656123 cri.go:89] found id: ""
	I1006 14:29:21.380477  656123 logs.go:282] 0 containers: []
	W1006 14:29:21.380486  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:29:21.380493  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:29:21.380559  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:29:21.408352  656123 cri.go:89] found id: ""
	I1006 14:29:21.408368  656123 logs.go:282] 0 containers: []
	W1006 14:29:21.408375  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:29:21.408379  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:29:21.408435  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:29:21.434925  656123 cri.go:89] found id: ""
	I1006 14:29:21.434941  656123 logs.go:282] 0 containers: []
	W1006 14:29:21.434948  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:29:21.434953  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:29:21.435001  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:29:21.462533  656123 cri.go:89] found id: ""
	I1006 14:29:21.462551  656123 logs.go:282] 0 containers: []
	W1006 14:29:21.462560  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:29:21.462570  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:29:21.462587  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:29:21.532658  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:29:21.532682  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:29:21.547259  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:29:21.547286  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:29:21.605779  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:29:21.598199    7083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:21.598802    7083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:21.600396    7083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:21.600847    7083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:21.602071    7083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:29:21.598199    7083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:21.598802    7083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:21.600396    7083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:21.600847    7083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:21.602071    7083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 14:29:21.605799  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:29:21.605816  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:29:21.670469  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:29:21.670493  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:29:24.203350  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:29:24.214528  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:29:24.214576  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:29:24.241149  656123 cri.go:89] found id: ""
	I1006 14:29:24.241173  656123 logs.go:282] 0 containers: []
	W1006 14:29:24.241182  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:29:24.241187  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:29:24.241259  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:29:24.267072  656123 cri.go:89] found id: ""
	I1006 14:29:24.267089  656123 logs.go:282] 0 containers: []
	W1006 14:29:24.267099  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:29:24.267104  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:29:24.267157  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:29:24.292610  656123 cri.go:89] found id: ""
	I1006 14:29:24.292629  656123 logs.go:282] 0 containers: []
	W1006 14:29:24.292639  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:29:24.292645  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:29:24.292694  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:29:24.318386  656123 cri.go:89] found id: ""
	I1006 14:29:24.318403  656123 logs.go:282] 0 containers: []
	W1006 14:29:24.318409  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:29:24.318414  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:29:24.318471  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:29:24.344804  656123 cri.go:89] found id: ""
	I1006 14:29:24.344827  656123 logs.go:282] 0 containers: []
	W1006 14:29:24.344837  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:29:24.344843  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:29:24.344893  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:29:24.372496  656123 cri.go:89] found id: ""
	I1006 14:29:24.372512  656123 logs.go:282] 0 containers: []
	W1006 14:29:24.372518  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:29:24.372523  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:29:24.372569  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:29:24.397473  656123 cri.go:89] found id: ""
	I1006 14:29:24.397489  656123 logs.go:282] 0 containers: []
	W1006 14:29:24.397495  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:29:24.397503  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:29:24.397514  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:29:24.460002  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:29:24.460024  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:29:24.492377  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:29:24.492394  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:29:24.558943  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:29:24.558960  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:29:24.572667  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:29:24.572685  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:29:24.631693  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:29:24.623841    7216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:24.624453    7216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:24.626057    7216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:24.626493    7216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:24.628013    7216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:29:24.623841    7216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:24.624453    7216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:24.626057    7216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:24.626493    7216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:24.628013    7216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
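Every "describe nodes" attempt fails identically: kubectl, driven by /var/lib/minikube/kubeconfig on the node, dials https://localhost:8441 and gets a TCP connection refusal, which is consistent with no kube-apiserver container ever starting. Two quick checks from inside the node would confirm the port is simply unbound (assuming ss and curl are available in the minikube image):

    # Nothing listening on the apiserver port named in the kubeconfig:
    sudo ss -ltn 'sport = :8441'
    # A live apiserver would answer /livez with an HTTP response, not a
    # connection refusal:
    curl -sk --max-time 5 https://localhost:8441/livez || echo "connect failed"
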
	I1006 14:29:27.132387  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:29:27.143350  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:29:27.143429  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:29:27.169854  656123 cri.go:89] found id: ""
	I1006 14:29:27.169869  656123 logs.go:282] 0 containers: []
	W1006 14:29:27.169877  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:29:27.169882  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:29:27.169930  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:29:27.196448  656123 cri.go:89] found id: ""
	I1006 14:29:27.196464  656123 logs.go:282] 0 containers: []
	W1006 14:29:27.196471  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:29:27.196476  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:29:27.196522  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:29:27.223046  656123 cri.go:89] found id: ""
	I1006 14:29:27.223066  656123 logs.go:282] 0 containers: []
	W1006 14:29:27.223075  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:29:27.223081  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:29:27.223147  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:29:27.249726  656123 cri.go:89] found id: ""
	I1006 14:29:27.249744  656123 logs.go:282] 0 containers: []
	W1006 14:29:27.249751  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:29:27.249756  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:29:27.249810  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:29:27.277358  656123 cri.go:89] found id: ""
	I1006 14:29:27.277376  656123 logs.go:282] 0 containers: []
	W1006 14:29:27.277391  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:29:27.277398  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:29:27.277468  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:29:27.303432  656123 cri.go:89] found id: ""
	I1006 14:29:27.303452  656123 logs.go:282] 0 containers: []
	W1006 14:29:27.303461  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:29:27.303467  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:29:27.303524  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:29:27.330642  656123 cri.go:89] found id: ""
	I1006 14:29:27.330660  656123 logs.go:282] 0 containers: []
	W1006 14:29:27.330666  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:29:27.330677  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:29:27.330692  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:29:27.360553  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:29:27.360570  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:29:27.428526  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:29:27.428550  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:29:27.442696  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:29:27.442720  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:29:27.500958  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:29:27.493064    7333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:27.493671    7333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:27.495253    7333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:27.495769    7333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:27.497273    7333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:29:27.493064    7333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:27.493671    7333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:27.495253    7333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:27.495769    7333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:27.497273    7333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 14:29:27.500983  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:29:27.500995  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:29:30.062974  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:29:30.074243  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:29:30.074297  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:29:30.101939  656123 cri.go:89] found id: ""
	I1006 14:29:30.101960  656123 logs.go:282] 0 containers: []
	W1006 14:29:30.101967  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:29:30.101973  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:29:30.102021  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:29:30.130122  656123 cri.go:89] found id: ""
	I1006 14:29:30.130139  656123 logs.go:282] 0 containers: []
	W1006 14:29:30.130145  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:29:30.130151  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:29:30.130229  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:29:30.157742  656123 cri.go:89] found id: ""
	I1006 14:29:30.157759  656123 logs.go:282] 0 containers: []
	W1006 14:29:30.157767  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:29:30.157773  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:29:30.157830  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:29:30.184613  656123 cri.go:89] found id: ""
	I1006 14:29:30.184634  656123 logs.go:282] 0 containers: []
	W1006 14:29:30.184641  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:29:30.184646  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:29:30.184696  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:29:30.212547  656123 cri.go:89] found id: ""
	I1006 14:29:30.212563  656123 logs.go:282] 0 containers: []
	W1006 14:29:30.212577  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:29:30.212582  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:29:30.212631  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:29:30.240288  656123 cri.go:89] found id: ""
	I1006 14:29:30.240303  656123 logs.go:282] 0 containers: []
	W1006 14:29:30.240310  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:29:30.240315  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:29:30.240365  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:29:30.267014  656123 cri.go:89] found id: ""
	I1006 14:29:30.267030  656123 logs.go:282] 0 containers: []
	W1006 14:29:30.267038  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:29:30.267047  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:29:30.267062  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:29:30.280742  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:29:30.280768  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:29:30.340211  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:29:30.332660    7440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:30.333170    7440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:30.334689    7440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:30.335152    7440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:30.336640    7440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:29:30.332660    7440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:30.333170    7440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:30.334689    7440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:30.335152    7440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:30.336640    7440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 14:29:30.340244  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:29:30.340259  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:29:30.401294  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:29:30.401334  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:29:30.433250  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:29:30.433271  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:29:33.006726  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:29:33.018059  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:29:33.018122  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:29:33.045352  656123 cri.go:89] found id: ""
	I1006 14:29:33.045372  656123 logs.go:282] 0 containers: []
	W1006 14:29:33.045380  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:29:33.045386  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:29:33.045436  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:29:33.072234  656123 cri.go:89] found id: ""
	I1006 14:29:33.072252  656123 logs.go:282] 0 containers: []
	W1006 14:29:33.072260  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:29:33.072265  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:29:33.072315  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:29:33.100162  656123 cri.go:89] found id: ""
	I1006 14:29:33.100178  656123 logs.go:282] 0 containers: []
	W1006 14:29:33.100185  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:29:33.100190  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:29:33.100258  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:29:33.128258  656123 cri.go:89] found id: ""
	I1006 14:29:33.128278  656123 logs.go:282] 0 containers: []
	W1006 14:29:33.128288  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:29:33.128293  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:29:33.128342  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:29:33.155116  656123 cri.go:89] found id: ""
	I1006 14:29:33.155146  656123 logs.go:282] 0 containers: []
	W1006 14:29:33.155153  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:29:33.155158  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:29:33.155226  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:29:33.183135  656123 cri.go:89] found id: ""
	I1006 14:29:33.183150  656123 logs.go:282] 0 containers: []
	W1006 14:29:33.183156  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:29:33.183161  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:29:33.183243  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:29:33.209826  656123 cri.go:89] found id: ""
	I1006 14:29:33.209844  656123 logs.go:282] 0 containers: []
	W1006 14:29:33.209851  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:29:33.209859  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:29:33.209870  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:29:33.276119  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:29:33.276145  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:29:33.289780  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:29:33.289805  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:29:33.346572  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:29:33.338882    7581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:33.339397    7581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:33.341034    7581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:33.341541    7581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:33.343088    7581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:29:33.338882    7581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:33.339397    7581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:33.341034    7581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:33.341541    7581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:33.343088    7581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 14:29:33.346592  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:29:33.346605  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:29:33.413643  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:29:33.413673  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:29:35.944641  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:29:35.955753  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:29:35.955806  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:29:35.981909  656123 cri.go:89] found id: ""
	I1006 14:29:35.981923  656123 logs.go:282] 0 containers: []
	W1006 14:29:35.981930  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:29:35.981935  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:29:35.981981  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:29:36.006585  656123 cri.go:89] found id: ""
	I1006 14:29:36.006605  656123 logs.go:282] 0 containers: []
	W1006 14:29:36.006615  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:29:36.006621  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:29:36.006687  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:29:36.034185  656123 cri.go:89] found id: ""
	I1006 14:29:36.034211  656123 logs.go:282] 0 containers: []
	W1006 14:29:36.034221  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:29:36.034228  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:29:36.034279  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:29:36.060600  656123 cri.go:89] found id: ""
	I1006 14:29:36.060618  656123 logs.go:282] 0 containers: []
	W1006 14:29:36.060625  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:29:36.060630  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:29:36.060676  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:29:36.086928  656123 cri.go:89] found id: ""
	I1006 14:29:36.086945  656123 logs.go:282] 0 containers: []
	W1006 14:29:36.086953  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:29:36.086957  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:29:36.087073  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:29:36.112833  656123 cri.go:89] found id: ""
	I1006 14:29:36.112851  656123 logs.go:282] 0 containers: []
	W1006 14:29:36.112875  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:29:36.112882  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:29:36.112944  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:29:36.139970  656123 cri.go:89] found id: ""
	I1006 14:29:36.139991  656123 logs.go:282] 0 containers: []
	W1006 14:29:36.140002  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:29:36.140014  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:29:36.140030  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:29:36.153360  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:29:36.153383  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:29:36.209902  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:29:36.202455    7695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:36.202929    7695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:36.204558    7695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:36.205025    7695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:36.206599    7695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:29:36.202455    7695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:36.202929    7695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:36.204558    7695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:36.205025    7695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:36.206599    7695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 14:29:36.209916  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:29:36.209929  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:29:36.276242  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:29:36.276264  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:29:36.305135  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:29:36.305152  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
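	[editor's note] The trace above repeats on a roughly three-second cadence: re-run pgrep for the apiserver process, and when it is absent, fall back to gathering logs. A minimal sketch of that polling pattern, not minikube's actual implementation (the function name and timeout are mine, and the command runs locally rather than over the SSH runner shown in the log):

	package main

	import (
		"context"
		"fmt"
		"os/exec"
		"time"
	)

	// waitForAPIServerProcess polls pgrep until the process appears or ctx expires.
	func waitForAPIServerProcess(ctx context.Context) error {
		ticker := time.NewTicker(3 * time.Second) // matches the ~3s gap between cycles above
		defer ticker.Stop()
		for {
			// -x exact match, -n newest, -f match the full command line,
			// exactly as in the logged command.
			if err := exec.CommandContext(ctx, "sudo", "pgrep", "-xnf",
				"kube-apiserver.*minikube.*").Run(); err == nil {
				return nil // process found
			}
			select {
			case <-ctx.Done():
				return fmt.Errorf("kube-apiserver never appeared: %w", ctx.Err())
			case <-ticker.C:
			}
		}
	}

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
		defer cancel()
		if err := waitForAPIServerProcess(ctx); err != nil {
			fmt.Println(err)
		}
	}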
	I1006 14:29:38.872573  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:29:38.884454  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:29:38.884512  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:29:38.911055  656123 cri.go:89] found id: ""
	I1006 14:29:38.911071  656123 logs.go:282] 0 containers: []
	W1006 14:29:38.911076  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:29:38.911081  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:29:38.911142  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:29:38.937413  656123 cri.go:89] found id: ""
	I1006 14:29:38.937433  656123 logs.go:282] 0 containers: []
	W1006 14:29:38.937441  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:29:38.937450  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:29:38.937529  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:29:38.963534  656123 cri.go:89] found id: ""
	I1006 14:29:38.963557  656123 logs.go:282] 0 containers: []
	W1006 14:29:38.963564  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:29:38.963569  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:29:38.963619  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:29:38.989811  656123 cri.go:89] found id: ""
	I1006 14:29:38.989825  656123 logs.go:282] 0 containers: []
	W1006 14:29:38.989831  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:29:38.989836  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:29:38.989882  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:29:39.016789  656123 cri.go:89] found id: ""
	I1006 14:29:39.016809  656123 logs.go:282] 0 containers: []
	W1006 14:29:39.016818  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:29:39.016824  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:29:39.016876  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:29:39.042392  656123 cri.go:89] found id: ""
	I1006 14:29:39.042407  656123 logs.go:282] 0 containers: []
	W1006 14:29:39.042413  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:29:39.042426  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:29:39.042473  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:29:39.068836  656123 cri.go:89] found id: ""
	I1006 14:29:39.068852  656123 logs.go:282] 0 containers: []
	W1006 14:29:39.068859  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:29:39.068867  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:29:39.068877  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:29:39.137663  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:29:39.137689  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:29:39.151471  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:29:39.151495  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:29:39.209176  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:29:39.201542    7818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:39.202107    7818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:39.203710    7818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:39.204183    7818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:39.205768    7818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:29:39.201542    7818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:39.202107    7818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:39.203710    7818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:39.204183    7818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:39.205768    7818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 14:29:39.209192  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:29:39.209218  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:29:39.274008  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:29:39.274031  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:29:41.804322  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:29:41.815323  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:29:41.815387  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:29:41.842055  656123 cri.go:89] found id: ""
	I1006 14:29:41.842070  656123 logs.go:282] 0 containers: []
	W1006 14:29:41.842077  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:29:41.842082  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:29:41.842129  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:29:41.868733  656123 cri.go:89] found id: ""
	I1006 14:29:41.868750  656123 logs.go:282] 0 containers: []
	W1006 14:29:41.868756  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:29:41.868762  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:29:41.868809  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:29:41.896710  656123 cri.go:89] found id: ""
	I1006 14:29:41.896732  656123 logs.go:282] 0 containers: []
	W1006 14:29:41.896742  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:29:41.896750  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:29:41.896807  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:29:41.924854  656123 cri.go:89] found id: ""
	I1006 14:29:41.924875  656123 logs.go:282] 0 containers: []
	W1006 14:29:41.924884  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:29:41.924891  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:29:41.924950  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:29:41.952359  656123 cri.go:89] found id: ""
	I1006 14:29:41.952376  656123 logs.go:282] 0 containers: []
	W1006 14:29:41.952382  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:29:41.952387  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:29:41.952453  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:29:41.979613  656123 cri.go:89] found id: ""
	I1006 14:29:41.979629  656123 logs.go:282] 0 containers: []
	W1006 14:29:41.979636  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:29:41.979640  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:29:41.979690  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:29:42.006904  656123 cri.go:89] found id: ""
	I1006 14:29:42.006923  656123 logs.go:282] 0 containers: []
	W1006 14:29:42.006931  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:29:42.006941  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:29:42.006953  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:29:42.020495  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:29:42.020518  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:29:42.078512  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:29:42.070746    7942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:42.071276    7942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:42.072881    7942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:42.073322    7942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:42.074846    7942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:29:42.070746    7942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:42.071276    7942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:42.072881    7942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:42.073322    7942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:42.074846    7942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 14:29:42.078528  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:29:42.078543  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:29:42.143410  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:29:42.143435  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:29:42.173024  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:29:42.173042  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
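	[editor's note] Within each cycle the same seven control-plane components are checked with `crictl ps -a --quiet --name=<component>`, which prints matching container IDs one per line; empty output is what the log reports as "0 containers". A sketch of that check, assuming crictl is on PATH and run locally (helper name is mine):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// containerIDs returns the IDs crictl reports for containers whose name
	// matches the given filter; an empty slice mirrors "found id: \"\"" above.
	func containerIDs(name string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		for _, c := range []string{"kube-apiserver", "etcd", "coredns",
			"kube-scheduler", "kube-proxy", "kube-controller-manager", "kindnet"} {
			ids, err := containerIDs(c)
			if err != nil {
				fmt.Printf("%s: %v\n", c, err)
				continue
			}
			fmt.Printf("%s: %d containers\n", c, len(ids))
		}
	}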
	I1006 14:29:44.740873  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:29:44.751791  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:29:44.751852  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:29:44.777079  656123 cri.go:89] found id: ""
	I1006 14:29:44.777096  656123 logs.go:282] 0 containers: []
	W1006 14:29:44.777103  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:29:44.777108  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:29:44.777158  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:29:44.802137  656123 cri.go:89] found id: ""
	I1006 14:29:44.802151  656123 logs.go:282] 0 containers: []
	W1006 14:29:44.802158  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:29:44.802163  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:29:44.802227  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:29:44.827942  656123 cri.go:89] found id: ""
	I1006 14:29:44.827957  656123 logs.go:282] 0 containers: []
	W1006 14:29:44.827964  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:29:44.827970  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:29:44.828014  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:29:44.853867  656123 cri.go:89] found id: ""
	I1006 14:29:44.853886  656123 logs.go:282] 0 containers: []
	W1006 14:29:44.853894  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:29:44.853901  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:29:44.853956  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:29:44.879907  656123 cri.go:89] found id: ""
	I1006 14:29:44.879923  656123 logs.go:282] 0 containers: []
	W1006 14:29:44.879931  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:29:44.879937  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:29:44.879994  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:29:44.905634  656123 cri.go:89] found id: ""
	I1006 14:29:44.905654  656123 logs.go:282] 0 containers: []
	W1006 14:29:44.905663  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:29:44.905673  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:29:44.905731  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:29:44.932500  656123 cri.go:89] found id: ""
	I1006 14:29:44.932515  656123 logs.go:282] 0 containers: []
	W1006 14:29:44.932524  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:29:44.932532  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:29:44.932543  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:29:44.960602  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:29:44.960619  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:29:45.030445  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:29:45.030474  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:29:45.043971  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:29:45.043991  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:29:45.101230  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:29:45.093566    8088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:45.094142    8088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:45.095685    8088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:45.096125    8088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:45.097721    8088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:29:45.093566    8088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:45.094142    8088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:45.095685    8088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:45.096125    8088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:45.097721    8088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 14:29:45.101246  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:29:45.101259  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:29:47.666091  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:29:47.677001  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:29:47.677061  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:29:47.703386  656123 cri.go:89] found id: ""
	I1006 14:29:47.703404  656123 logs.go:282] 0 containers: []
	W1006 14:29:47.703412  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:29:47.703423  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:29:47.703482  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:29:47.729961  656123 cri.go:89] found id: ""
	I1006 14:29:47.729978  656123 logs.go:282] 0 containers: []
	W1006 14:29:47.729985  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:29:47.729998  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:29:47.730046  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:29:47.757114  656123 cri.go:89] found id: ""
	I1006 14:29:47.757148  656123 logs.go:282] 0 containers: []
	W1006 14:29:47.757155  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:29:47.757160  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:29:47.757220  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:29:47.783979  656123 cri.go:89] found id: ""
	I1006 14:29:47.783997  656123 logs.go:282] 0 containers: []
	W1006 14:29:47.784004  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:29:47.784008  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:29:47.784054  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:29:47.809265  656123 cri.go:89] found id: ""
	I1006 14:29:47.809280  656123 logs.go:282] 0 containers: []
	W1006 14:29:47.809287  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:29:47.809292  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:29:47.809337  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:29:47.834447  656123 cri.go:89] found id: ""
	I1006 14:29:47.834463  656123 logs.go:282] 0 containers: []
	W1006 14:29:47.834470  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:29:47.834474  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:29:47.834518  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:29:47.860785  656123 cri.go:89] found id: ""
	I1006 14:29:47.860802  656123 logs.go:282] 0 containers: []
	W1006 14:29:47.860808  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:29:47.860817  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:29:47.860827  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:29:47.928576  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:29:47.928600  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:29:47.942643  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:29:47.942669  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:29:48.000352  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:29:47.992403    8197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:47.992971    8197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:47.994566    8197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:47.995054    8197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:47.996597    8197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:29:47.992403    8197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:47.992971    8197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:47.994566    8197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:47.995054    8197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:47.996597    8197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 14:29:48.000373  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:29:48.000391  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:29:48.065612  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:29:48.065640  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
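	[editor's note] The recurring "connection refused" on localhost:8441 means nothing is listening on the apiserver port inside the node; the kubectl "couldn't get current server API group list" errors above are the same failure surfaced through client-go's discovery cache. A minimal sketch of that check as a plain TCP dial (port and timeout taken from the log; this is a diagnostic aid, not part of minikube):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
		if err != nil {
			// This is the failure mode the log keeps hitting.
			fmt.Println("apiserver port closed:", err)
			return
		}
		conn.Close()
		fmt.Println("something is listening on :8441")
	}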
	I1006 14:29:50.596504  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:29:50.607654  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:29:50.607709  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:29:50.634723  656123 cri.go:89] found id: ""
	I1006 14:29:50.634742  656123 logs.go:282] 0 containers: []
	W1006 14:29:50.634751  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:29:50.634758  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:29:50.634821  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:29:50.662103  656123 cri.go:89] found id: ""
	I1006 14:29:50.662122  656123 logs.go:282] 0 containers: []
	W1006 14:29:50.662152  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:29:50.662160  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:29:50.662232  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:29:50.688627  656123 cri.go:89] found id: ""
	I1006 14:29:50.688646  656123 logs.go:282] 0 containers: []
	W1006 14:29:50.688653  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:29:50.688658  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:29:50.688719  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:29:50.715511  656123 cri.go:89] found id: ""
	I1006 14:29:50.715530  656123 logs.go:282] 0 containers: []
	W1006 14:29:50.715540  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:29:50.715544  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:29:50.715608  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:29:50.742597  656123 cri.go:89] found id: ""
	I1006 14:29:50.742612  656123 logs.go:282] 0 containers: []
	W1006 14:29:50.742619  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:29:50.742624  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:29:50.742671  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:29:50.769656  656123 cri.go:89] found id: ""
	I1006 14:29:50.769672  656123 logs.go:282] 0 containers: []
	W1006 14:29:50.769679  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:29:50.769684  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:29:50.769740  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:29:50.797585  656123 cri.go:89] found id: ""
	I1006 14:29:50.797603  656123 logs.go:282] 0 containers: []
	W1006 14:29:50.797611  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:29:50.797620  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:29:50.797631  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:29:50.811635  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:29:50.811664  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:29:50.870641  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:29:50.863296    8314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:50.863835    8314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:50.865405    8314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:50.865832    8314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:50.866946    8314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:29:50.863296    8314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:50.863835    8314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:50.865405    8314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:50.865832    8314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:50.866946    8314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 14:29:50.870652  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:29:50.870665  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:29:50.933617  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:29:50.933644  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:29:50.964985  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:29:50.965003  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:29:53.535109  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:29:53.545986  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:29:53.546039  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:29:53.571300  656123 cri.go:89] found id: ""
	I1006 14:29:53.571315  656123 logs.go:282] 0 containers: []
	W1006 14:29:53.571322  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:29:53.571328  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:29:53.571373  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:29:53.597111  656123 cri.go:89] found id: ""
	I1006 14:29:53.597126  656123 logs.go:282] 0 containers: []
	W1006 14:29:53.597132  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:29:53.597137  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:29:53.597188  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:29:53.621477  656123 cri.go:89] found id: ""
	I1006 14:29:53.621493  656123 logs.go:282] 0 containers: []
	W1006 14:29:53.621500  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:29:53.621504  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:29:53.621550  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:29:53.647877  656123 cri.go:89] found id: ""
	I1006 14:29:53.647891  656123 logs.go:282] 0 containers: []
	W1006 14:29:53.647898  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:29:53.647902  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:29:53.647947  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:29:53.673269  656123 cri.go:89] found id: ""
	I1006 14:29:53.673284  656123 logs.go:282] 0 containers: []
	W1006 14:29:53.673291  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:29:53.673296  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:29:53.673356  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:29:53.698368  656123 cri.go:89] found id: ""
	I1006 14:29:53.698384  656123 logs.go:282] 0 containers: []
	W1006 14:29:53.698390  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:29:53.698395  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:29:53.698446  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:29:53.724452  656123 cri.go:89] found id: ""
	I1006 14:29:53.724471  656123 logs.go:282] 0 containers: []
	W1006 14:29:53.724481  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:29:53.724491  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:29:53.724507  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:29:53.790937  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:29:53.790959  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:29:53.804913  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:29:53.804929  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:29:53.862094  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:29:53.854344    8433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:53.854872    8433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:53.856476    8433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:53.856953    8433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:53.858577    8433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:29:53.854344    8433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:53.854872    8433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:53.856476    8433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:53.856953    8433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:53.858577    8433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 14:29:53.862111  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:29:53.862124  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:29:53.921847  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:29:53.921867  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
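	[editor's note] When no control-plane containers exist, each cycle falls back to the same four log sources: the kubelet and CRI-O journals, recent kernel warnings from dmesg, and container status via crictl (with a docker fallback). A sketch that shells out to the exact command strings from the log lines above, run locally rather than through the SSH runner:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		cmds := map[string]string{
			"kubelet": "sudo journalctl -u kubelet -n 400",
			"CRI-O":   "sudo journalctl -u crio -n 400",
			"dmesg":   "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
			"status":  "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
		}
		for name, cmd := range cmds {
			out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
			fmt.Printf("== %s (err=%v) ==\n%s\n", name, err, out)
		}
	}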
	I1006 14:29:56.452775  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:29:56.464702  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:29:56.464760  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:29:56.491587  656123 cri.go:89] found id: ""
	I1006 14:29:56.491603  656123 logs.go:282] 0 containers: []
	W1006 14:29:56.491609  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:29:56.491614  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:29:56.491662  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:29:56.517138  656123 cri.go:89] found id: ""
	I1006 14:29:56.517157  656123 logs.go:282] 0 containers: []
	W1006 14:29:56.517166  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:29:56.517170  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:29:56.517243  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:29:56.542713  656123 cri.go:89] found id: ""
	I1006 14:29:56.542728  656123 logs.go:282] 0 containers: []
	W1006 14:29:56.542735  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:29:56.542740  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:29:56.542787  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:29:56.568528  656123 cri.go:89] found id: ""
	I1006 14:29:56.568545  656123 logs.go:282] 0 containers: []
	W1006 14:29:56.568554  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:29:56.568561  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:29:56.568616  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:29:56.593881  656123 cri.go:89] found id: ""
	I1006 14:29:56.593897  656123 logs.go:282] 0 containers: []
	W1006 14:29:56.593904  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:29:56.593909  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:29:56.593957  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:29:56.618843  656123 cri.go:89] found id: ""
	I1006 14:29:56.618862  656123 logs.go:282] 0 containers: []
	W1006 14:29:56.618869  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:29:56.618874  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:29:56.618931  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:29:56.644219  656123 cri.go:89] found id: ""
	I1006 14:29:56.644239  656123 logs.go:282] 0 containers: []
	W1006 14:29:56.644249  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:29:56.644258  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:29:56.644270  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:29:56.701345  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:29:56.693737    8555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:56.694299    8555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:56.695864    8555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:56.696432    8555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:56.697961    8555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:29:56.693737    8555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:56.694299    8555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:56.695864    8555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:56.696432    8555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:56.697961    8555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 14:29:56.701372  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:29:56.701384  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:29:56.762071  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:29:56.762096  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:29:56.791634  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:29:56.791656  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:29:56.857469  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:29:56.857492  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:29:59.371748  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:29:59.383943  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:29:59.384004  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:29:59.411674  656123 cri.go:89] found id: ""
	I1006 14:29:59.411695  656123 logs.go:282] 0 containers: []
	W1006 14:29:59.411703  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:29:59.411712  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:29:59.411829  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:29:59.438177  656123 cri.go:89] found id: ""
	I1006 14:29:59.438193  656123 logs.go:282] 0 containers: []
	W1006 14:29:59.438200  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:29:59.438217  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:29:59.438276  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:29:59.467581  656123 cri.go:89] found id: ""
	I1006 14:29:59.467601  656123 logs.go:282] 0 containers: []
	W1006 14:29:59.467611  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:29:59.467619  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:29:59.467682  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:29:59.496610  656123 cri.go:89] found id: ""
	I1006 14:29:59.496626  656123 logs.go:282] 0 containers: []
	W1006 14:29:59.496633  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:29:59.496638  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:29:59.496684  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:29:59.523799  656123 cri.go:89] found id: ""
	I1006 14:29:59.523815  656123 logs.go:282] 0 containers: []
	W1006 14:29:59.523822  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:29:59.523827  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:29:59.523889  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:29:59.550529  656123 cri.go:89] found id: ""
	I1006 14:29:59.550546  656123 logs.go:282] 0 containers: []
	W1006 14:29:59.550553  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:29:59.550558  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:29:59.550606  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:29:59.577487  656123 cri.go:89] found id: ""
	I1006 14:29:59.577503  656123 logs.go:282] 0 containers: []
	W1006 14:29:59.577509  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:29:59.577518  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:29:59.577529  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:29:59.607238  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:29:59.607260  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:29:59.676960  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:29:59.676986  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:29:59.690846  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:29:59.690869  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:29:59.749311  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:29:59.741475    8704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:59.742053    8704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:59.743670    8704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:59.744122    8704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:59.745515    8704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:29:59.741475    8704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:59.742053    8704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:59.743670    8704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:59.744122    8704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:59.745515    8704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 14:29:59.749329  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:29:59.749339  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:30:02.310264  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:30:02.321519  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:30:02.321570  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:30:02.347821  656123 cri.go:89] found id: ""
	I1006 14:30:02.347842  656123 logs.go:282] 0 containers: []
	W1006 14:30:02.347852  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:30:02.347860  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:30:02.347920  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:30:02.373381  656123 cri.go:89] found id: ""
	I1006 14:30:02.373404  656123 logs.go:282] 0 containers: []
	W1006 14:30:02.373412  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:30:02.373418  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:30:02.373462  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:30:02.401169  656123 cri.go:89] found id: ""
	I1006 14:30:02.401189  656123 logs.go:282] 0 containers: []
	W1006 14:30:02.401199  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:30:02.401215  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:30:02.401271  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:30:02.427774  656123 cri.go:89] found id: ""
	I1006 14:30:02.427790  656123 logs.go:282] 0 containers: []
	W1006 14:30:02.427799  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:30:02.427806  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:30:02.427858  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:30:02.453624  656123 cri.go:89] found id: ""
	I1006 14:30:02.453642  656123 logs.go:282] 0 containers: []
	W1006 14:30:02.453652  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:30:02.453659  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:30:02.453725  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:30:02.480503  656123 cri.go:89] found id: ""
	I1006 14:30:02.480520  656123 logs.go:282] 0 containers: []
	W1006 14:30:02.480526  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:30:02.480531  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:30:02.480581  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:30:02.506624  656123 cri.go:89] found id: ""
	I1006 14:30:02.506643  656123 logs.go:282] 0 containers: []
	W1006 14:30:02.506652  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:30:02.506662  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:30:02.506675  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:30:02.575030  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:30:02.575055  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:30:02.589240  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:30:02.589266  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:30:02.647840  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:30:02.640193    8804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:02.640759    8804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:02.642327    8804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:02.642757    8804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:02.644424    8804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:30:02.640193    8804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:02.640759    8804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:02.642327    8804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:02.642757    8804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:02.644424    8804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 14:30:02.647855  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:30:02.647866  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:30:02.710907  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:30:02.710932  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
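Each poll cycle enumerates the expected control-plane containers one component at a time with sudo crictl ps -a --quiet --name=<component>; empty stdout is what produces the found id: "" and 0 containers lines. The sketch below is an assumed wrapper for illustration only; the real logic lives in minikube's cri.go.

// list_containers.go: sketch of the per-component container listing seen
// in the log (assumed wrapper, not minikube's cri.go).
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listIDs runs `sudo crictl ps -a --quiet --name=<name>` and returns the
// container IDs from stdout. An empty slice corresponds to the log's
// `found id: ""` / `0 containers` output.
func listIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns",
		"kube-scheduler", "kube-proxy", "kube-controller-manager", "kindnet"} {
		ids, err := listIDs(c)
		if err != nil {
			fmt.Printf("listing %s failed: %v\n", c, err)
			continue
		}
		fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
	}
}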
	I1006 14:30:05.243556  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:30:05.254230  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:30:05.254287  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:30:05.279490  656123 cri.go:89] found id: ""
	I1006 14:30:05.279506  656123 logs.go:282] 0 containers: []
	W1006 14:30:05.279514  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:30:05.279520  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:30:05.279572  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:30:05.305513  656123 cri.go:89] found id: ""
	I1006 14:30:05.305533  656123 logs.go:282] 0 containers: []
	W1006 14:30:05.305539  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:30:05.305544  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:30:05.305591  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:30:05.331962  656123 cri.go:89] found id: ""
	I1006 14:30:05.331981  656123 logs.go:282] 0 containers: []
	W1006 14:30:05.331990  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:30:05.331996  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:30:05.332058  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:30:05.357789  656123 cri.go:89] found id: ""
	I1006 14:30:05.357807  656123 logs.go:282] 0 containers: []
	W1006 14:30:05.357815  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:30:05.357820  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:30:05.357866  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:30:05.383637  656123 cri.go:89] found id: ""
	I1006 14:30:05.383658  656123 logs.go:282] 0 containers: []
	W1006 14:30:05.383664  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:30:05.383669  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:30:05.383715  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:30:05.408314  656123 cri.go:89] found id: ""
	I1006 14:30:05.408332  656123 logs.go:282] 0 containers: []
	W1006 14:30:05.408341  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:30:05.408348  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:30:05.408418  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:30:05.433843  656123 cri.go:89] found id: ""
	I1006 14:30:05.433861  656123 logs.go:282] 0 containers: []
	W1006 14:30:05.433867  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:30:05.433876  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:30:05.433888  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:30:05.494147  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:30:05.494176  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:30:05.523997  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:30:05.524016  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:30:05.591019  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:30:05.591039  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:30:05.604531  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:30:05.604546  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:30:05.660873  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:30:05.653677    8938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:05.654169    8938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:05.655684    8938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:05.656053    8938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:05.657599    8938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:30:05.653677    8938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:05.654169    8938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:05.655684    8938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:05.656053    8938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:05.657599    8938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 14:30:08.162635  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:30:08.173492  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:30:08.173538  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:30:08.199879  656123 cri.go:89] found id: ""
	I1006 14:30:08.199896  656123 logs.go:282] 0 containers: []
	W1006 14:30:08.199902  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:30:08.199907  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:30:08.199954  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:30:08.225501  656123 cri.go:89] found id: ""
	I1006 14:30:08.225520  656123 logs.go:282] 0 containers: []
	W1006 14:30:08.225531  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:30:08.225537  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:30:08.225598  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:30:08.251711  656123 cri.go:89] found id: ""
	I1006 14:30:08.251730  656123 logs.go:282] 0 containers: []
	W1006 14:30:08.251737  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:30:08.251742  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:30:08.251790  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:30:08.277559  656123 cri.go:89] found id: ""
	I1006 14:30:08.277575  656123 logs.go:282] 0 containers: []
	W1006 14:30:08.277584  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:30:08.277594  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:30:08.277656  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:30:08.303749  656123 cri.go:89] found id: ""
	I1006 14:30:08.303767  656123 logs.go:282] 0 containers: []
	W1006 14:30:08.303776  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:30:08.303781  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:30:08.303830  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:30:08.329034  656123 cri.go:89] found id: ""
	I1006 14:30:08.329053  656123 logs.go:282] 0 containers: []
	W1006 14:30:08.329059  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:30:08.329064  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:30:08.329111  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:30:08.354393  656123 cri.go:89] found id: ""
	I1006 14:30:08.354409  656123 logs.go:282] 0 containers: []
	W1006 14:30:08.354416  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:30:08.354423  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:30:08.354434  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:30:08.416780  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:30:08.416799  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:30:08.444904  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:30:08.444925  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:30:08.518089  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:30:08.518111  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:30:08.531108  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:30:08.531124  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:30:08.586529  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:30:08.578762    9065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:08.579607    9065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:08.581199    9065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:08.581663    9065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:08.583179    9065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:30:08.578762    9065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:08.579607    9065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:08.581199    9065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:08.581663    9065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:08.583179    9065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
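The cycle timestamps (14:30:02, :05, :08, :11, ...) show the wait loop retrying roughly every three seconds. The sketch below models that cadence; the interval and overall timeout are inferred from the log, not taken from minikube's configuration, and the pgrep line mirrors the command at the top of each cycle.

// wait_apiserver.go: sketch of the retry loop implied by the timestamps
// above (one pass roughly every 3 s; interval and timeout are assumptions).
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func apiserverRunning() bool {
	// pgrep exits non-zero when no process matches, which is exactly the
	// state this log captures on every pass.
	err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
	return err == nil
}

func main() {
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		if apiserverRunning() {
			fmt.Println("kube-apiserver process found")
			return
		}
		time.Sleep(3 * time.Second) // cadence inferred from the log timestamps
	}
	fmt.Println("timed out: kube-apiserver never started")
}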
	I1006 14:30:11.087318  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:30:11.098631  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:30:11.098701  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:30:11.125423  656123 cri.go:89] found id: ""
	I1006 14:30:11.125441  656123 logs.go:282] 0 containers: []
	W1006 14:30:11.125450  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:30:11.125456  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:30:11.125520  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:30:11.154785  656123 cri.go:89] found id: ""
	I1006 14:30:11.154803  656123 logs.go:282] 0 containers: []
	W1006 14:30:11.154810  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:30:11.154815  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:30:11.154868  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:30:11.180879  656123 cri.go:89] found id: ""
	I1006 14:30:11.180899  656123 logs.go:282] 0 containers: []
	W1006 14:30:11.180908  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:30:11.180915  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:30:11.180979  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:30:11.207281  656123 cri.go:89] found id: ""
	I1006 14:30:11.207308  656123 logs.go:282] 0 containers: []
	W1006 14:30:11.207318  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:30:11.207326  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:30:11.207391  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:30:11.234275  656123 cri.go:89] found id: ""
	I1006 14:30:11.234293  656123 logs.go:282] 0 containers: []
	W1006 14:30:11.234302  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:30:11.234308  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:30:11.234379  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:30:11.261486  656123 cri.go:89] found id: ""
	I1006 14:30:11.261502  656123 logs.go:282] 0 containers: []
	W1006 14:30:11.261508  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:30:11.261514  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:30:11.261561  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:30:11.287155  656123 cri.go:89] found id: ""
	I1006 14:30:11.287173  656123 logs.go:282] 0 containers: []
	W1006 14:30:11.287180  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:30:11.287189  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:30:11.287223  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:30:11.358359  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:30:11.358383  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:30:11.372359  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:30:11.372385  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:30:11.430998  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:30:11.423269    9166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:11.423805    9166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:11.425394    9166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:11.425911    9166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:11.427479    9166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:30:11.423269    9166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:11.423805    9166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:11.425394    9166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:11.425911    9166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:11.427479    9166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 14:30:11.431012  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:30:11.431023  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:30:11.498514  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:30:11.498538  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:30:14.030847  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:30:14.041715  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:30:14.041763  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:30:14.067907  656123 cri.go:89] found id: ""
	I1006 14:30:14.067927  656123 logs.go:282] 0 containers: []
	W1006 14:30:14.067938  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:30:14.067944  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:30:14.067992  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:30:14.093781  656123 cri.go:89] found id: ""
	I1006 14:30:14.093800  656123 logs.go:282] 0 containers: []
	W1006 14:30:14.093810  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:30:14.093817  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:30:14.093873  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:30:14.120737  656123 cri.go:89] found id: ""
	I1006 14:30:14.120752  656123 logs.go:282] 0 containers: []
	W1006 14:30:14.120759  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:30:14.120765  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:30:14.120825  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:30:14.148551  656123 cri.go:89] found id: ""
	I1006 14:30:14.148567  656123 logs.go:282] 0 containers: []
	W1006 14:30:14.148575  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:30:14.148580  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:30:14.148632  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:30:14.174943  656123 cri.go:89] found id: ""
	I1006 14:30:14.174960  656123 logs.go:282] 0 containers: []
	W1006 14:30:14.174965  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:30:14.174970  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:30:14.175032  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:30:14.201148  656123 cri.go:89] found id: ""
	I1006 14:30:14.201163  656123 logs.go:282] 0 containers: []
	W1006 14:30:14.201172  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:30:14.201178  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:30:14.201245  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:30:14.228046  656123 cri.go:89] found id: ""
	I1006 14:30:14.228062  656123 logs.go:282] 0 containers: []
	W1006 14:30:14.228068  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:30:14.228077  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:30:14.228087  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:30:14.300889  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:30:14.300914  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:30:14.314304  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:30:14.314326  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:30:14.370818  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:30:14.363282    9300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:14.363836    9300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:14.365383    9300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:14.365793    9300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:14.367329    9300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:30:14.363282    9300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:14.363836    9300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:14.365383    9300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:14.365793    9300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:14.367329    9300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 14:30:14.370827  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:30:14.370838  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:30:14.431681  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:30:14.431704  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
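Unit logs are collected on every pass with journalctl -u <unit> -n 400 for both kubelet and CRI-O. The helper below is an assumed stand-in for illustration; minikube issues the same commands remotely through ssh_runner.

// gather_logs.go: sketch of the journalctl-based gathers seen above
// (last 400 lines per systemd unit; the wrapper itself is assumed).
package main

import (
	"fmt"
	"os/exec"
)

// lastLines returns the last n journal lines for a systemd unit.
func lastLines(unit string, n int) (string, error) {
	out, err := exec.Command("sudo", "journalctl", "-u", unit, "-n", fmt.Sprint(n)).Output()
	return string(out), err
}

func main() {
	for _, unit := range []string{"kubelet", "crio"} {
		logs, err := lastLines(unit, 400)
		if err != nil {
			fmt.Printf("gathering %s logs failed: %v\n", unit, err)
			continue
		}
		fmt.Printf("== %s (%d bytes) ==\n", unit, len(logs))
	}
}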
	I1006 14:30:16.961397  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:30:16.973165  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:30:16.973247  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:30:17.001273  656123 cri.go:89] found id: ""
	I1006 14:30:17.001291  656123 logs.go:282] 0 containers: []
	W1006 14:30:17.001297  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:30:17.001302  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:30:17.001354  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:30:17.027536  656123 cri.go:89] found id: ""
	I1006 14:30:17.027557  656123 logs.go:282] 0 containers: []
	W1006 14:30:17.027565  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:30:17.027570  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:30:17.027622  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:30:17.054924  656123 cri.go:89] found id: ""
	I1006 14:30:17.054940  656123 logs.go:282] 0 containers: []
	W1006 14:30:17.054947  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:30:17.054953  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:30:17.055000  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:30:17.083443  656123 cri.go:89] found id: ""
	I1006 14:30:17.083460  656123 logs.go:282] 0 containers: []
	W1006 14:30:17.083467  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:30:17.083472  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:30:17.083522  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:30:17.111442  656123 cri.go:89] found id: ""
	I1006 14:30:17.111459  656123 logs.go:282] 0 containers: []
	W1006 14:30:17.111467  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:30:17.111474  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:30:17.111530  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:30:17.138310  656123 cri.go:89] found id: ""
	I1006 14:30:17.138329  656123 logs.go:282] 0 containers: []
	W1006 14:30:17.138338  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:30:17.138344  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:30:17.138393  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:30:17.166360  656123 cri.go:89] found id: ""
	I1006 14:30:17.166389  656123 logs.go:282] 0 containers: []
	W1006 14:30:17.166400  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:30:17.166411  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:30:17.166427  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:30:17.238488  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:30:17.238516  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:30:17.252654  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:30:17.252688  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:30:17.312602  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:30:17.304484    9418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:17.305059    9418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:17.306672    9418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:17.307166    9418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:17.308768    9418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:30:17.304484    9418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:17.305059    9418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:17.306672    9418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:17.307166    9418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:17.308768    9418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 14:30:17.312623  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:30:17.312634  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:30:17.375185  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:30:17.375222  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:30:19.907611  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:30:19.918724  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:30:19.918776  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:30:19.945244  656123 cri.go:89] found id: ""
	I1006 14:30:19.945264  656123 logs.go:282] 0 containers: []
	W1006 14:30:19.945277  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:30:19.945285  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:30:19.945343  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:30:19.972919  656123 cri.go:89] found id: ""
	I1006 14:30:19.972939  656123 logs.go:282] 0 containers: []
	W1006 14:30:19.972949  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:30:19.972955  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:30:19.973008  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:30:19.999841  656123 cri.go:89] found id: ""
	I1006 14:30:19.999858  656123 logs.go:282] 0 containers: []
	W1006 14:30:19.999864  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:30:19.999870  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:30:19.999926  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:30:20.027271  656123 cri.go:89] found id: ""
	I1006 14:30:20.027290  656123 logs.go:282] 0 containers: []
	W1006 14:30:20.027299  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:30:20.027306  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:30:20.027364  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:30:20.054297  656123 cri.go:89] found id: ""
	I1006 14:30:20.054313  656123 logs.go:282] 0 containers: []
	W1006 14:30:20.054320  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:30:20.054325  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:30:20.054380  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:30:20.081354  656123 cri.go:89] found id: ""
	I1006 14:30:20.081374  656123 logs.go:282] 0 containers: []
	W1006 14:30:20.081380  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:30:20.081386  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:30:20.081438  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:30:20.108256  656123 cri.go:89] found id: ""
	I1006 14:30:20.108273  656123 logs.go:282] 0 containers: []
	W1006 14:30:20.108280  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:30:20.108289  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:30:20.108303  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:30:20.177476  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:30:20.177501  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:30:20.191396  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:30:20.191419  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:30:20.250424  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:30:20.242535    9540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:20.243129    9540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:20.244697    9540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:20.245110    9540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:20.246705    9540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:30:20.242535    9540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:20.243129    9540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:20.244697    9540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:20.245110    9540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:20.246705    9540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 14:30:20.250437  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:30:20.250448  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:30:20.311404  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:30:20.311430  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
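The "container status" gather relies on a shell fallback: which crictl || echo crictl keeps the command word non-empty, so when crictl is missing the sudo invocation fails and control falls through to sudo docker ps -a. Running the same line through bash -c from Go reproduces it; the wrapper itself is assumed, the shell line is copied from the log.

// container_status.go: sketch of the "container status" gather step
// (shell line copied verbatim from the log; Go wrapper is assumed).
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Prefer crictl when installed, otherwise fall back to docker.
	cmd := "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	if err != nil {
		fmt.Println("container status failed:", err)
	}
	fmt.Print(string(out))
}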
	I1006 14:30:22.842482  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:30:22.854386  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:30:22.854451  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:30:22.882144  656123 cri.go:89] found id: ""
	I1006 14:30:22.882160  656123 logs.go:282] 0 containers: []
	W1006 14:30:22.882167  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:30:22.882176  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:30:22.882244  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:30:22.908078  656123 cri.go:89] found id: ""
	I1006 14:30:22.908097  656123 logs.go:282] 0 containers: []
	W1006 14:30:22.908106  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:30:22.908112  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:30:22.908163  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:30:22.934596  656123 cri.go:89] found id: ""
	I1006 14:30:22.934613  656123 logs.go:282] 0 containers: []
	W1006 14:30:22.934620  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:30:22.934624  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:30:22.934673  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:30:22.961803  656123 cri.go:89] found id: ""
	I1006 14:30:22.961821  656123 logs.go:282] 0 containers: []
	W1006 14:30:22.961830  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:30:22.961837  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:30:22.961889  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:30:22.988277  656123 cri.go:89] found id: ""
	I1006 14:30:22.988293  656123 logs.go:282] 0 containers: []
	W1006 14:30:22.988300  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:30:22.988305  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:30:22.988355  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:30:23.015411  656123 cri.go:89] found id: ""
	I1006 14:30:23.015428  656123 logs.go:282] 0 containers: []
	W1006 14:30:23.015436  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:30:23.015441  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:30:23.015494  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:30:23.042508  656123 cri.go:89] found id: ""
	I1006 14:30:23.042526  656123 logs.go:282] 0 containers: []
	W1006 14:30:23.042534  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:30:23.042545  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:30:23.042558  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:30:23.110932  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:30:23.110957  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:30:23.125294  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:30:23.125322  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:30:23.185388  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:30:23.177268    9660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:23.177825    9660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:23.179508    9660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:23.179961    9660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:23.181496    9660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:30:23.177268    9660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:23.177825    9660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:23.179508    9660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:23.179961    9660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:23.181496    9660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 14:30:23.185405  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:30:23.185418  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:30:23.246673  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:30:23.246696  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:30:25.778383  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:30:25.789490  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:30:25.789539  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:30:25.816713  656123 cri.go:89] found id: ""
	I1006 14:30:25.816731  656123 logs.go:282] 0 containers: []
	W1006 14:30:25.816737  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:30:25.816742  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:30:25.816792  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:30:25.844676  656123 cri.go:89] found id: ""
	I1006 14:30:25.844699  656123 logs.go:282] 0 containers: []
	W1006 14:30:25.844708  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:30:25.844716  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:30:25.844784  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:30:25.872027  656123 cri.go:89] found id: ""
	I1006 14:30:25.872046  656123 logs.go:282] 0 containers: []
	W1006 14:30:25.872054  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:30:25.872059  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:30:25.872115  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:30:25.898454  656123 cri.go:89] found id: ""
	I1006 14:30:25.898473  656123 logs.go:282] 0 containers: []
	W1006 14:30:25.898480  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:30:25.898486  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:30:25.898548  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:30:25.926559  656123 cri.go:89] found id: ""
	I1006 14:30:25.926576  656123 logs.go:282] 0 containers: []
	W1006 14:30:25.926583  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:30:25.926589  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:30:25.926638  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:30:25.953516  656123 cri.go:89] found id: ""
	I1006 14:30:25.953535  656123 logs.go:282] 0 containers: []
	W1006 14:30:25.953544  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:30:25.953562  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:30:25.953634  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:30:25.980962  656123 cri.go:89] found id: ""
	I1006 14:30:25.980978  656123 logs.go:282] 0 containers: []
	W1006 14:30:25.980986  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:30:25.980994  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:30:25.981012  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:30:26.052486  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:30:26.052510  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:30:26.066688  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:30:26.066710  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:30:26.126899  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:30:26.118941    9785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:26.119633    9785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:26.121265    9785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:26.121767    9785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:26.123331    9785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:30:26.118941    9785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:26.119633    9785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:26.121265    9785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:26.121767    9785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:26.123331    9785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 14:30:26.126912  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:30:26.126924  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:30:26.187018  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:30:26.187047  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
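
The cycle above (probe for a kube-apiserver process, list each control-plane component with crictl, then gather kubelet/dmesg/describe-nodes/CRI-O/container-status logs) repeats roughly every three seconds for the rest of this span. For readers reproducing the diagnosis by hand, it reduces to the following shell sketch. The commands, component names, paths, and line limits are taken verbatim from the log lines above; the loop wrapper, function names, and the 3-second sleep are illustrative assumptions, not minikube code.

#!/usr/bin/env bash
# Minimal sketch of the probe/log-gather cycle recorded above.
# Commands mirror the logged invocations; the wrapper is illustrative.
set -u

probe_apiserver() {
  # Succeeds (exit 0) once a kube-apiserver process for this profile exists.
  sudo pgrep -xnf 'kube-apiserver.*minikube.*'
}

list_component() {
  # Prints container IDs for a component; empty output means "not found".
  sudo crictl ps -a --quiet --name="$1"
}

gather_logs() {
  sudo journalctl -u kubelet -n 400
  sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
  # Fails with "connection refused" while the apiserver is down, as logged.
  sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes \
    --kubeconfig=/var/lib/minikube/kubeconfig
  sudo journalctl -u crio -n 400
  sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a
}

until probe_apiserver; do
  for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
           kube-controller-manager kindnet; do
    [ -n "$(list_component "$c")" ] || echo "no container matching \"$c\""
  done
  gather_logs
  sleep 3   # assumed interval; the log shows ~3s between probes
done
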
	I1006 14:30:28.721028  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:30:28.732295  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:30:28.732361  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:30:28.759561  656123 cri.go:89] found id: ""
	I1006 14:30:28.759583  656123 logs.go:282] 0 containers: []
	W1006 14:30:28.759592  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:30:28.759598  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:30:28.759651  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:30:28.787553  656123 cri.go:89] found id: ""
	I1006 14:30:28.787573  656123 logs.go:282] 0 containers: []
	W1006 14:30:28.787584  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:30:28.787598  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:30:28.787653  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:30:28.813499  656123 cri.go:89] found id: ""
	I1006 14:30:28.813520  656123 logs.go:282] 0 containers: []
	W1006 14:30:28.813529  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:30:28.813535  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:30:28.813591  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:30:28.840441  656123 cri.go:89] found id: ""
	I1006 14:30:28.840462  656123 logs.go:282] 0 containers: []
	W1006 14:30:28.840468  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:30:28.840474  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:30:28.840523  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:30:28.867632  656123 cri.go:89] found id: ""
	I1006 14:30:28.867647  656123 logs.go:282] 0 containers: []
	W1006 14:30:28.867654  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:30:28.867659  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:30:28.867709  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:30:28.895005  656123 cri.go:89] found id: ""
	I1006 14:30:28.895023  656123 logs.go:282] 0 containers: []
	W1006 14:30:28.895029  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:30:28.895034  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:30:28.895082  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:30:28.920965  656123 cri.go:89] found id: ""
	I1006 14:30:28.920983  656123 logs.go:282] 0 containers: []
	W1006 14:30:28.920993  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:30:28.921003  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:30:28.921017  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:30:28.981278  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:30:28.981302  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:30:29.010983  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:30:29.011000  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:30:29.078541  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:30:29.078565  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:30:29.092586  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:30:29.092613  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:30:29.151129  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:30:29.143937    9927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:29.144542    9927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:29.146112    9927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:29.146650    9927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:29.147708    9927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:30:29.143937    9927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:29.144542    9927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:29.146112    9927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:29.146650    9927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:29.147708    9927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 14:30:31.652214  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:30:31.663823  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:30:31.663891  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:30:31.690576  656123 cri.go:89] found id: ""
	I1006 14:30:31.690596  656123 logs.go:282] 0 containers: []
	W1006 14:30:31.690606  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:30:31.690613  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:30:31.690666  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:30:31.716874  656123 cri.go:89] found id: ""
	I1006 14:30:31.716894  656123 logs.go:282] 0 containers: []
	W1006 14:30:31.716902  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:30:31.716907  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:30:31.716956  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:30:31.744572  656123 cri.go:89] found id: ""
	I1006 14:30:31.744594  656123 logs.go:282] 0 containers: []
	W1006 14:30:31.744603  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:30:31.744611  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:30:31.744681  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:30:31.771539  656123 cri.go:89] found id: ""
	I1006 14:30:31.771556  656123 logs.go:282] 0 containers: []
	W1006 14:30:31.771565  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:30:31.771575  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:30:31.771637  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:30:31.798102  656123 cri.go:89] found id: ""
	I1006 14:30:31.798118  656123 logs.go:282] 0 containers: []
	W1006 14:30:31.798125  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:30:31.798131  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:30:31.798175  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:30:31.825905  656123 cri.go:89] found id: ""
	I1006 14:30:31.825921  656123 logs.go:282] 0 containers: []
	W1006 14:30:31.825928  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:30:31.825933  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:30:31.825985  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:30:31.853474  656123 cri.go:89] found id: ""
	I1006 14:30:31.853489  656123 logs.go:282] 0 containers: []
	W1006 14:30:31.853496  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:30:31.853504  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:30:31.853515  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:30:31.925541  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:30:31.925566  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:30:31.939650  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:30:31.939676  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:30:31.998586  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:30:31.990853   10031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:31.991461   10031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:31.992961   10031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:31.993424   10031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:31.994933   10031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:30:31.990853   10031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:31.991461   10031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:31.992961   10031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:31.993424   10031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:31.994933   10031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 14:30:31.998595  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:30:31.998606  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:30:32.058322  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:30:32.058348  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:30:34.591129  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:30:34.602495  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:30:34.602545  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:30:34.628973  656123 cri.go:89] found id: ""
	I1006 14:30:34.628991  656123 logs.go:282] 0 containers: []
	W1006 14:30:34.628998  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:30:34.629003  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:30:34.629048  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:30:34.654917  656123 cri.go:89] found id: ""
	I1006 14:30:34.654934  656123 logs.go:282] 0 containers: []
	W1006 14:30:34.654941  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:30:34.654945  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:30:34.654997  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:30:34.680385  656123 cri.go:89] found id: ""
	I1006 14:30:34.680401  656123 logs.go:282] 0 containers: []
	W1006 14:30:34.680408  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:30:34.680413  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:30:34.680459  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:30:34.705914  656123 cri.go:89] found id: ""
	I1006 14:30:34.705929  656123 logs.go:282] 0 containers: []
	W1006 14:30:34.705935  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:30:34.705940  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:30:34.705989  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:30:34.731580  656123 cri.go:89] found id: ""
	I1006 14:30:34.731597  656123 logs.go:282] 0 containers: []
	W1006 14:30:34.731604  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:30:34.731609  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:30:34.731661  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:30:34.756200  656123 cri.go:89] found id: ""
	I1006 14:30:34.756232  656123 logs.go:282] 0 containers: []
	W1006 14:30:34.756239  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:30:34.756244  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:30:34.756293  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:30:34.781770  656123 cri.go:89] found id: ""
	I1006 14:30:34.781785  656123 logs.go:282] 0 containers: []
	W1006 14:30:34.781794  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:30:34.781802  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:30:34.781813  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:30:34.850861  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:30:34.850884  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:30:34.864688  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:30:34.864706  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:30:34.921713  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:30:34.914358   10154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:34.914917   10154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:34.916495   10154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:34.916918   10154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:34.918459   10154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:30:34.914358   10154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:34.914917   10154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:34.916495   10154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:34.916918   10154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:34.918459   10154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 14:30:34.921723  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:30:34.921733  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:30:34.985884  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:30:34.985906  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:30:37.516053  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:30:37.526705  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:30:37.526751  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:30:37.551472  656123 cri.go:89] found id: ""
	I1006 14:30:37.551490  656123 logs.go:282] 0 containers: []
	W1006 14:30:37.551500  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:30:37.551507  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:30:37.551561  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:30:37.576603  656123 cri.go:89] found id: ""
	I1006 14:30:37.576619  656123 logs.go:282] 0 containers: []
	W1006 14:30:37.576626  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:30:37.576630  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:30:37.576674  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:30:37.602217  656123 cri.go:89] found id: ""
	I1006 14:30:37.602241  656123 logs.go:282] 0 containers: []
	W1006 14:30:37.602250  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:30:37.602254  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:30:37.602300  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:30:37.627547  656123 cri.go:89] found id: ""
	I1006 14:30:37.627561  656123 logs.go:282] 0 containers: []
	W1006 14:30:37.627567  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:30:37.627572  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:30:37.627614  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:30:37.652434  656123 cri.go:89] found id: ""
	I1006 14:30:37.652451  656123 logs.go:282] 0 containers: []
	W1006 14:30:37.652460  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:30:37.652467  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:30:37.652519  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:30:37.677543  656123 cri.go:89] found id: ""
	I1006 14:30:37.677558  656123 logs.go:282] 0 containers: []
	W1006 14:30:37.677564  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:30:37.677569  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:30:37.677611  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:30:37.701695  656123 cri.go:89] found id: ""
	I1006 14:30:37.701711  656123 logs.go:282] 0 containers: []
	W1006 14:30:37.701718  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:30:37.701727  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:30:37.701737  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:30:37.730832  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:30:37.730852  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:30:37.799686  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:30:37.799708  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:30:37.813081  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:30:37.813106  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:30:37.869274  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:30:37.861812   10287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:37.862406   10287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:37.863958   10287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:37.864398   10287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:37.865877   10287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:30:37.861812   10287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:37.862406   10287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:37.863958   10287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:37.864398   10287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:37.865877   10287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 14:30:37.869285  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:30:37.869297  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:30:40.432488  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:30:40.443779  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:30:40.443830  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:30:40.471502  656123 cri.go:89] found id: ""
	I1006 14:30:40.471520  656123 logs.go:282] 0 containers: []
	W1006 14:30:40.471528  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:30:40.471533  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:30:40.471591  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:30:40.498418  656123 cri.go:89] found id: ""
	I1006 14:30:40.498435  656123 logs.go:282] 0 containers: []
	W1006 14:30:40.498442  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:30:40.498447  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:30:40.498495  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:30:40.525987  656123 cri.go:89] found id: ""
	I1006 14:30:40.526003  656123 logs.go:282] 0 containers: []
	W1006 14:30:40.526009  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:30:40.526015  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:30:40.526073  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:30:40.554161  656123 cri.go:89] found id: ""
	I1006 14:30:40.554180  656123 logs.go:282] 0 containers: []
	W1006 14:30:40.554190  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:30:40.554197  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:30:40.554262  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:30:40.581168  656123 cri.go:89] found id: ""
	I1006 14:30:40.581186  656123 logs.go:282] 0 containers: []
	W1006 14:30:40.581193  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:30:40.581198  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:30:40.581272  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:30:40.608862  656123 cri.go:89] found id: ""
	I1006 14:30:40.608879  656123 logs.go:282] 0 containers: []
	W1006 14:30:40.608890  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:30:40.608899  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:30:40.608951  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:30:40.636053  656123 cri.go:89] found id: ""
	I1006 14:30:40.636069  656123 logs.go:282] 0 containers: []
	W1006 14:30:40.636076  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:30:40.636084  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:30:40.636096  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:30:40.649832  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:30:40.649854  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:30:40.708143  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:30:40.700302   10406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:40.700800   10406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:40.702328   10406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:40.702794   10406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:40.704437   10406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:30:40.700302   10406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:40.700800   10406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:40.702328   10406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:40.702794   10406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:40.704437   10406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 14:30:40.708157  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:30:40.708173  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:30:40.767571  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:30:40.767598  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:30:40.798425  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:30:40.798447  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:30:43.369172  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:30:43.380275  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:30:43.380336  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:30:43.407137  656123 cri.go:89] found id: ""
	I1006 14:30:43.407166  656123 logs.go:282] 0 containers: []
	W1006 14:30:43.407172  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:30:43.407178  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:30:43.407255  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:30:43.434264  656123 cri.go:89] found id: ""
	I1006 14:30:43.434280  656123 logs.go:282] 0 containers: []
	W1006 14:30:43.434286  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:30:43.434291  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:30:43.434344  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:30:43.460492  656123 cri.go:89] found id: ""
	I1006 14:30:43.460511  656123 logs.go:282] 0 containers: []
	W1006 14:30:43.460521  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:30:43.460527  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:30:43.460579  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:30:43.486096  656123 cri.go:89] found id: ""
	I1006 14:30:43.486112  656123 logs.go:282] 0 containers: []
	W1006 14:30:43.486118  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:30:43.486123  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:30:43.486180  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:30:43.512166  656123 cri.go:89] found id: ""
	I1006 14:30:43.512182  656123 logs.go:282] 0 containers: []
	W1006 14:30:43.512189  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:30:43.512200  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:30:43.512274  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:30:43.540182  656123 cri.go:89] found id: ""
	I1006 14:30:43.540198  656123 logs.go:282] 0 containers: []
	W1006 14:30:43.540225  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:30:43.540231  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:30:43.540281  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:30:43.566257  656123 cri.go:89] found id: ""
	I1006 14:30:43.566276  656123 logs.go:282] 0 containers: []
	W1006 14:30:43.566283  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:30:43.566291  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:30:43.566301  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:30:43.633282  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:30:43.633308  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:30:43.646525  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:30:43.646547  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:30:43.703245  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:30:43.695412   10527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:43.695958   10527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:43.697564   10527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:43.698089   10527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:43.699634   10527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:30:43.695412   10527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:43.695958   10527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:43.697564   10527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:43.698089   10527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:43.699634   10527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 14:30:43.703258  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:30:43.703271  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:30:43.763009  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:30:43.763030  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:30:46.294610  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:30:46.306608  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:30:46.306657  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:30:46.333990  656123 cri.go:89] found id: ""
	I1006 14:30:46.334010  656123 logs.go:282] 0 containers: []
	W1006 14:30:46.334017  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:30:46.334023  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:30:46.334071  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:30:46.360169  656123 cri.go:89] found id: ""
	I1006 14:30:46.360186  656123 logs.go:282] 0 containers: []
	W1006 14:30:46.360193  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:30:46.360197  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:30:46.360274  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:30:46.386526  656123 cri.go:89] found id: ""
	I1006 14:30:46.386543  656123 logs.go:282] 0 containers: []
	W1006 14:30:46.386552  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:30:46.386559  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:30:46.386618  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:30:46.412732  656123 cri.go:89] found id: ""
	I1006 14:30:46.412755  656123 logs.go:282] 0 containers: []
	W1006 14:30:46.412761  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:30:46.412768  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:30:46.412819  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:30:46.437943  656123 cri.go:89] found id: ""
	I1006 14:30:46.437961  656123 logs.go:282] 0 containers: []
	W1006 14:30:46.437969  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:30:46.437975  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:30:46.438022  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:30:46.462227  656123 cri.go:89] found id: ""
	I1006 14:30:46.462245  656123 logs.go:282] 0 containers: []
	W1006 14:30:46.462254  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:30:46.462259  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:30:46.462308  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:30:46.486426  656123 cri.go:89] found id: ""
	I1006 14:30:46.486446  656123 logs.go:282] 0 containers: []
	W1006 14:30:46.486455  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:30:46.486465  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:30:46.486478  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:30:46.555804  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:30:46.555824  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:30:46.568953  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:30:46.568977  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:30:46.625518  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:30:46.616895   10651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:46.618433   10651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:46.618998   10651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:46.620647   10651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:46.621154   10651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:30:46.616895   10651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:46.618433   10651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:46.618998   10651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:46.620647   10651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:46.621154   10651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 14:30:46.625532  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:30:46.625542  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:30:46.689026  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:30:46.689045  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:30:49.220452  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:30:49.231376  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:30:49.231437  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:30:49.257464  656123 cri.go:89] found id: ""
	I1006 14:30:49.257484  656123 logs.go:282] 0 containers: []
	W1006 14:30:49.257492  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:30:49.257499  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:30:49.257549  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:30:49.282291  656123 cri.go:89] found id: ""
	I1006 14:30:49.282305  656123 logs.go:282] 0 containers: []
	W1006 14:30:49.282315  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:30:49.282322  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:30:49.282374  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:30:49.307787  656123 cri.go:89] found id: ""
	I1006 14:30:49.307806  656123 logs.go:282] 0 containers: []
	W1006 14:30:49.307815  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:30:49.307821  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:30:49.307872  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:30:49.333154  656123 cri.go:89] found id: ""
	I1006 14:30:49.333172  656123 logs.go:282] 0 containers: []
	W1006 14:30:49.333179  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:30:49.333185  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:30:49.333252  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:30:49.359161  656123 cri.go:89] found id: ""
	I1006 14:30:49.359175  656123 logs.go:282] 0 containers: []
	W1006 14:30:49.359183  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:30:49.359188  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:30:49.359253  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:30:49.385380  656123 cri.go:89] found id: ""
	I1006 14:30:49.385398  656123 logs.go:282] 0 containers: []
	W1006 14:30:49.385405  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:30:49.385410  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:30:49.385461  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:30:49.409982  656123 cri.go:89] found id: ""
	I1006 14:30:49.410009  656123 logs.go:282] 0 containers: []
	W1006 14:30:49.410020  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:30:49.410030  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:30:49.410043  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:30:49.470637  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:30:49.470662  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:30:49.498568  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:30:49.498585  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:30:49.568338  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:30:49.568355  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:30:49.581842  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:30:49.581863  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:30:49.638518  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:30:49.631016   10785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:49.631575   10785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:49.633164   10785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:49.633595   10785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:49.635088   10785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1006 14:30:52.139121  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:30:52.151341  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:30:52.151400  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:30:52.180909  656123 cri.go:89] found id: ""
	I1006 14:30:52.180929  656123 logs.go:282] 0 containers: []
	W1006 14:30:52.180937  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:30:52.180943  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:30:52.181004  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:30:52.212664  656123 cri.go:89] found id: ""
	I1006 14:30:52.212687  656123 logs.go:282] 0 containers: []
	W1006 14:30:52.212695  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:30:52.212700  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:30:52.212753  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:30:52.242804  656123 cri.go:89] found id: ""
	I1006 14:30:52.242824  656123 logs.go:282] 0 containers: []
	W1006 14:30:52.242833  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:30:52.242840  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:30:52.242906  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:30:52.275408  656123 cri.go:89] found id: ""
	I1006 14:30:52.275428  656123 logs.go:282] 0 containers: []
	W1006 14:30:52.275437  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:30:52.275443  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:30:52.275511  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:30:52.304772  656123 cri.go:89] found id: ""
	I1006 14:30:52.304791  656123 logs.go:282] 0 containers: []
	W1006 14:30:52.304797  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:30:52.304802  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:30:52.304855  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:30:52.334628  656123 cri.go:89] found id: ""
	I1006 14:30:52.334646  656123 logs.go:282] 0 containers: []
	W1006 14:30:52.334665  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:30:52.334672  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:30:52.334744  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:30:52.363535  656123 cri.go:89] found id: ""
	I1006 14:30:52.363551  656123 logs.go:282] 0 containers: []
	W1006 14:30:52.363558  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:30:52.363567  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:30:52.363578  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:30:52.395148  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:30:52.395172  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:30:52.467790  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:30:52.467818  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:30:52.483589  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:30:52.483613  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:30:52.547153  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:30:52.538900   10918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:52.539522   10918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:52.541194   10918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:52.541724   10918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:52.543496   10918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1006 14:30:52.547168  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:30:52.547191  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:30:55.111539  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:30:55.123376  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:30:55.123432  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:30:55.151263  656123 cri.go:89] found id: ""
	I1006 14:30:55.151278  656123 logs.go:282] 0 containers: []
	W1006 14:30:55.151285  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:30:55.151289  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:30:55.151354  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:30:55.179099  656123 cri.go:89] found id: ""
	I1006 14:30:55.179116  656123 logs.go:282] 0 containers: []
	W1006 14:30:55.179123  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:30:55.179127  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:30:55.179177  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:30:55.207568  656123 cri.go:89] found id: ""
	I1006 14:30:55.207586  656123 logs.go:282] 0 containers: []
	W1006 14:30:55.207594  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:30:55.207599  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:30:55.207653  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:30:55.236037  656123 cri.go:89] found id: ""
	I1006 14:30:55.236058  656123 logs.go:282] 0 containers: []
	W1006 14:30:55.236068  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:30:55.236075  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:30:55.236132  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:30:55.263286  656123 cri.go:89] found id: ""
	I1006 14:30:55.263304  656123 logs.go:282] 0 containers: []
	W1006 14:30:55.263311  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:30:55.263316  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:30:55.263416  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:30:55.291167  656123 cri.go:89] found id: ""
	I1006 14:30:55.291189  656123 logs.go:282] 0 containers: []
	W1006 14:30:55.291197  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:30:55.291217  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:30:55.291271  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:30:55.318410  656123 cri.go:89] found id: ""
	I1006 14:30:55.318430  656123 logs.go:282] 0 containers: []
	W1006 14:30:55.318440  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:30:55.318450  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:30:55.318461  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:30:55.385160  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:30:55.385187  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:30:55.399050  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:30:55.399076  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:30:55.458418  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:30:55.450518   11025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:55.451123   11025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:55.452726   11025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:55.453351   11025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:55.454908   11025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1006 14:30:55.458432  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:30:55.458448  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:30:55.524792  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:30:55.524816  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:30:58.057888  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:30:58.068966  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:30:58.069020  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:30:58.096398  656123 cri.go:89] found id: ""
	I1006 14:30:58.096415  656123 logs.go:282] 0 containers: []
	W1006 14:30:58.096423  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:30:58.096428  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:30:58.096477  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:30:58.123183  656123 cri.go:89] found id: ""
	I1006 14:30:58.123199  656123 logs.go:282] 0 containers: []
	W1006 14:30:58.123218  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:30:58.123225  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:30:58.123278  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:30:58.149129  656123 cri.go:89] found id: ""
	I1006 14:30:58.149145  656123 logs.go:282] 0 containers: []
	W1006 14:30:58.149152  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:30:58.149156  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:30:58.149231  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:30:58.176154  656123 cri.go:89] found id: ""
	I1006 14:30:58.176171  656123 logs.go:282] 0 containers: []
	W1006 14:30:58.176178  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:30:58.176183  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:30:58.176260  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:30:58.202224  656123 cri.go:89] found id: ""
	I1006 14:30:58.202244  656123 logs.go:282] 0 containers: []
	W1006 14:30:58.202252  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:30:58.202257  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:30:58.202308  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:30:58.228701  656123 cri.go:89] found id: ""
	I1006 14:30:58.228722  656123 logs.go:282] 0 containers: []
	W1006 14:30:58.228731  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:30:58.228738  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:30:58.228789  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:30:58.255405  656123 cri.go:89] found id: ""
	I1006 14:30:58.255424  656123 logs.go:282] 0 containers: []
	W1006 14:30:58.255434  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:30:58.255445  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:30:58.255463  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:30:58.326378  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:30:58.326403  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:30:58.340088  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:30:58.340113  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:30:58.398424  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:30:58.390470   11153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:58.391705   11153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:58.392182   11153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:58.393789   11153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:58.394272   11153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1006 14:30:58.398434  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:30:58.398444  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:30:58.458532  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:30:58.458557  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:31:00.988890  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:31:01.000117  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:31:01.000187  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:31:01.027975  656123 cri.go:89] found id: ""
	I1006 14:31:01.027994  656123 logs.go:282] 0 containers: []
	W1006 14:31:01.028005  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:31:01.028011  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:31:01.028073  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:31:01.057671  656123 cri.go:89] found id: ""
	I1006 14:31:01.057689  656123 logs.go:282] 0 containers: []
	W1006 14:31:01.057695  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:31:01.057703  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:31:01.057753  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:31:01.086296  656123 cri.go:89] found id: ""
	I1006 14:31:01.086312  656123 logs.go:282] 0 containers: []
	W1006 14:31:01.086319  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:31:01.086324  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:31:01.086380  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:31:01.115804  656123 cri.go:89] found id: ""
	I1006 14:31:01.115828  656123 logs.go:282] 0 containers: []
	W1006 14:31:01.115838  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:31:01.115846  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:31:01.115914  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:31:01.143626  656123 cri.go:89] found id: ""
	I1006 14:31:01.143652  656123 logs.go:282] 0 containers: []
	W1006 14:31:01.143662  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:31:01.143669  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:31:01.143730  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:31:01.173329  656123 cri.go:89] found id: ""
	I1006 14:31:01.173351  656123 logs.go:282] 0 containers: []
	W1006 14:31:01.173358  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:31:01.173363  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:31:01.173425  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:31:01.202447  656123 cri.go:89] found id: ""
	I1006 14:31:01.202464  656123 logs.go:282] 0 containers: []
	W1006 14:31:01.202472  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:31:01.202481  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:31:01.202493  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:31:01.264676  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:31:01.255680   11269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:01.256306   11269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:01.258878   11269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:01.259545   11269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:01.261098   11269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1006 14:31:01.264688  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:31:01.264701  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:31:01.325726  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:31:01.325755  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:31:01.357935  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:31:01.357956  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:31:01.426320  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:31:01.426346  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:31:03.942695  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:31:03.954165  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:31:03.954257  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:31:03.982933  656123 cri.go:89] found id: ""
	I1006 14:31:03.982952  656123 logs.go:282] 0 containers: []
	W1006 14:31:03.982960  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:31:03.982966  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:31:03.983023  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:31:04.010750  656123 cri.go:89] found id: ""
	I1006 14:31:04.010768  656123 logs.go:282] 0 containers: []
	W1006 14:31:04.010775  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:31:04.010780  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:31:04.010845  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:31:04.038408  656123 cri.go:89] found id: ""
	I1006 14:31:04.038430  656123 logs.go:282] 0 containers: []
	W1006 14:31:04.038440  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:31:04.038446  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:31:04.038506  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:31:04.065987  656123 cri.go:89] found id: ""
	I1006 14:31:04.066004  656123 logs.go:282] 0 containers: []
	W1006 14:31:04.066011  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:31:04.066017  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:31:04.066064  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:31:04.092615  656123 cri.go:89] found id: ""
	I1006 14:31:04.092635  656123 logs.go:282] 0 containers: []
	W1006 14:31:04.092645  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:31:04.092651  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:31:04.092715  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:31:04.120296  656123 cri.go:89] found id: ""
	I1006 14:31:04.120314  656123 logs.go:282] 0 containers: []
	W1006 14:31:04.120324  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:31:04.120331  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:31:04.120392  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:31:04.148258  656123 cri.go:89] found id: ""
	I1006 14:31:04.148275  656123 logs.go:282] 0 containers: []
	W1006 14:31:04.148282  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:31:04.148291  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:31:04.148303  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:31:04.162693  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:31:04.162716  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:31:04.222565  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:31:04.214872   11401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:04.215499   11401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:04.216999   11401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:04.217486   11401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:04.218767   11401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1006 14:31:04.222576  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:31:04.222588  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:31:04.284619  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:31:04.284645  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:31:04.315049  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:31:04.315067  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:31:06.880125  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:31:06.891035  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:31:06.891100  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:31:06.919022  656123 cri.go:89] found id: ""
	I1006 14:31:06.919039  656123 logs.go:282] 0 containers: []
	W1006 14:31:06.919054  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:31:06.919059  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:31:06.919109  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:31:06.945007  656123 cri.go:89] found id: ""
	I1006 14:31:06.945023  656123 logs.go:282] 0 containers: []
	W1006 14:31:06.945030  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:31:06.945035  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:31:06.945082  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:31:06.971114  656123 cri.go:89] found id: ""
	I1006 14:31:06.971140  656123 logs.go:282] 0 containers: []
	W1006 14:31:06.971150  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:31:06.971156  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:31:06.971219  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:31:06.997325  656123 cri.go:89] found id: ""
	I1006 14:31:06.997341  656123 logs.go:282] 0 containers: []
	W1006 14:31:06.997349  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:31:06.997354  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:31:06.997399  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:31:07.024483  656123 cri.go:89] found id: ""
	I1006 14:31:07.024503  656123 logs.go:282] 0 containers: []
	W1006 14:31:07.024510  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:31:07.024515  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:31:07.024563  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:31:07.050897  656123 cri.go:89] found id: ""
	I1006 14:31:07.050916  656123 logs.go:282] 0 containers: []
	W1006 14:31:07.050924  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:31:07.050929  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:31:07.050988  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:31:07.076681  656123 cri.go:89] found id: ""
	I1006 14:31:07.076698  656123 logs.go:282] 0 containers: []
	W1006 14:31:07.076706  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:31:07.076716  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:31:07.076730  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:31:07.137015  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:31:07.137039  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:31:07.167691  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:31:07.167711  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:31:07.236752  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:31:07.236774  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:31:07.250497  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:31:07.250519  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:31:07.307410  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:31:07.299651   11539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:07.300252   11539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:07.301817   11539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:07.302267   11539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:07.303782   11539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1006 14:31:09.809076  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:31:09.819941  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:31:09.819991  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:31:09.847047  656123 cri.go:89] found id: ""
	I1006 14:31:09.847066  656123 logs.go:282] 0 containers: []
	W1006 14:31:09.847075  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:31:09.847082  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:31:09.847151  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:31:09.873840  656123 cri.go:89] found id: ""
	I1006 14:31:09.873856  656123 logs.go:282] 0 containers: []
	W1006 14:31:09.873862  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:31:09.873867  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:31:09.873923  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:31:09.900892  656123 cri.go:89] found id: ""
	I1006 14:31:09.900908  656123 logs.go:282] 0 containers: []
	W1006 14:31:09.900914  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:31:09.900920  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:31:09.900967  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:31:09.927801  656123 cri.go:89] found id: ""
	I1006 14:31:09.927822  656123 logs.go:282] 0 containers: []
	W1006 14:31:09.927835  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:31:09.927842  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:31:09.927892  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:31:09.955400  656123 cri.go:89] found id: ""
	I1006 14:31:09.955420  656123 logs.go:282] 0 containers: []
	W1006 14:31:09.955428  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:31:09.955433  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:31:09.955484  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:31:09.981624  656123 cri.go:89] found id: ""
	I1006 14:31:09.981640  656123 logs.go:282] 0 containers: []
	W1006 14:31:09.981647  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:31:09.981653  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:31:09.981700  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:31:10.009693  656123 cri.go:89] found id: ""
	I1006 14:31:10.009710  656123 logs.go:282] 0 containers: []
	W1006 14:31:10.009716  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:31:10.009724  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:31:10.009735  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:31:10.075460  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:31:10.075492  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:31:10.089300  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:31:10.089327  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:31:10.148123  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:31:10.140282   11647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:10.140860   11647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:10.142433   11647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:10.142866   11647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:10.144460   11647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1006 14:31:10.148152  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:31:10.148165  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:31:10.210442  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:31:10.210473  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:31:12.742692  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:31:12.754226  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:31:12.754289  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:31:12.783228  656123 cri.go:89] found id: ""
	I1006 14:31:12.783249  656123 logs.go:282] 0 containers: []
	W1006 14:31:12.783256  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:31:12.783263  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:31:12.783324  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:31:12.811693  656123 cri.go:89] found id: ""
	I1006 14:31:12.811715  656123 logs.go:282] 0 containers: []
	W1006 14:31:12.811725  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:31:12.811732  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:31:12.811782  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:31:12.840310  656123 cri.go:89] found id: ""
	I1006 14:31:12.840332  656123 logs.go:282] 0 containers: []
	W1006 14:31:12.840342  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:31:12.840348  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:31:12.840402  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:31:12.869101  656123 cri.go:89] found id: ""
	I1006 14:31:12.869123  656123 logs.go:282] 0 containers: []
	W1006 14:31:12.869131  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:31:12.869137  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:31:12.869189  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:31:12.897605  656123 cri.go:89] found id: ""
	I1006 14:31:12.897623  656123 logs.go:282] 0 containers: []
	W1006 14:31:12.897630  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:31:12.897635  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:31:12.897693  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:31:12.926227  656123 cri.go:89] found id: ""
	I1006 14:31:12.926247  656123 logs.go:282] 0 containers: []
	W1006 14:31:12.926254  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:31:12.926260  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:31:12.926308  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:31:12.955298  656123 cri.go:89] found id: ""
	I1006 14:31:12.955315  656123 logs.go:282] 0 containers: []
	W1006 14:31:12.955324  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:31:12.955334  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:31:12.955348  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:31:13.021936  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:31:13.021962  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:31:13.036093  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:31:13.036115  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:31:13.096234  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:31:13.088298   11777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:13.088908   11777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:13.090517   11777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:13.090973   11777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:13.092543   11777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1006 14:31:13.096246  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:31:13.096258  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:31:13.156934  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:31:13.156960  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:31:15.689959  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:31:15.701228  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:31:15.701301  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:31:15.727030  656123 cri.go:89] found id: ""
	I1006 14:31:15.727050  656123 logs.go:282] 0 containers: []
	W1006 14:31:15.727059  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:31:15.727067  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:31:15.727119  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:31:15.753392  656123 cri.go:89] found id: ""
	I1006 14:31:15.753409  656123 logs.go:282] 0 containers: []
	W1006 14:31:15.753417  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:31:15.753421  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:31:15.753471  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:31:15.780750  656123 cri.go:89] found id: ""
	I1006 14:31:15.780775  656123 logs.go:282] 0 containers: []
	W1006 14:31:15.780783  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:31:15.780788  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:31:15.780842  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:31:15.807372  656123 cri.go:89] found id: ""
	I1006 14:31:15.807388  656123 logs.go:282] 0 containers: []
	W1006 14:31:15.807401  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:31:15.807406  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:31:15.807461  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:31:15.834188  656123 cri.go:89] found id: ""
	I1006 14:31:15.834222  656123 logs.go:282] 0 containers: []
	W1006 14:31:15.834233  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:31:15.834240  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:31:15.834293  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:31:15.861606  656123 cri.go:89] found id: ""
	I1006 14:31:15.861624  656123 logs.go:282] 0 containers: []
	W1006 14:31:15.861631  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:31:15.861636  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:31:15.861702  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:31:15.888991  656123 cri.go:89] found id: ""
	I1006 14:31:15.889007  656123 logs.go:282] 0 containers: []
	W1006 14:31:15.889014  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:31:15.889022  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:31:15.889035  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:31:15.956002  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:31:15.956024  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:31:15.969830  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:31:15.969850  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:31:16.026629  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:31:16.019009   11895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:16.019537   11895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:16.021047   11895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:16.021513   11895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:16.023044   11895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:31:16.019009   11895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:16.019537   11895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:16.021047   11895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:16.021513   11895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:16.023044   11895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 14:31:16.026643  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:31:16.026656  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:31:16.085192  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:31:16.085220  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:31:18.616289  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:31:18.627239  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:31:18.627304  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:31:18.655298  656123 cri.go:89] found id: ""
	I1006 14:31:18.655318  656123 logs.go:282] 0 containers: []
	W1006 14:31:18.655327  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:31:18.655334  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:31:18.655392  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:31:18.682590  656123 cri.go:89] found id: ""
	I1006 14:31:18.682609  656123 logs.go:282] 0 containers: []
	W1006 14:31:18.682616  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:31:18.682623  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:31:18.682684  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:31:18.709329  656123 cri.go:89] found id: ""
	I1006 14:31:18.709349  656123 logs.go:282] 0 containers: []
	W1006 14:31:18.709359  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:31:18.709366  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:31:18.709428  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:31:18.735272  656123 cri.go:89] found id: ""
	I1006 14:31:18.735292  656123 logs.go:282] 0 containers: []
	W1006 14:31:18.735302  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:31:18.735309  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:31:18.735370  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:31:18.761956  656123 cri.go:89] found id: ""
	I1006 14:31:18.761973  656123 logs.go:282] 0 containers: []
	W1006 14:31:18.761980  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:31:18.761984  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:31:18.762047  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:31:18.788186  656123 cri.go:89] found id: ""
	I1006 14:31:18.788224  656123 logs.go:282] 0 containers: []
	W1006 14:31:18.788234  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:31:18.788241  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:31:18.788293  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:31:18.814751  656123 cri.go:89] found id: ""
	I1006 14:31:18.814768  656123 logs.go:282] 0 containers: []
	W1006 14:31:18.814775  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:31:18.814783  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:31:18.814793  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:31:18.874634  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:31:18.867140   12017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:18.867734   12017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:18.869314   12017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:18.869766   12017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:18.871291   12017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:31:18.867140   12017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:18.867734   12017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:18.869314   12017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:18.869766   12017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:18.871291   12017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 14:31:18.874645  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:31:18.874658  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:31:18.934741  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:31:18.934765  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:31:18.964835  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:31:18.964857  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:31:19.034348  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:31:19.034372  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:31:21.549097  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:31:21.560431  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:31:21.560497  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:31:21.588270  656123 cri.go:89] found id: ""
	I1006 14:31:21.588285  656123 logs.go:282] 0 containers: []
	W1006 14:31:21.588292  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:31:21.588297  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:31:21.588352  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:31:21.615501  656123 cri.go:89] found id: ""
	I1006 14:31:21.615519  656123 logs.go:282] 0 containers: []
	W1006 14:31:21.615527  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:31:21.615532  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:31:21.615590  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:31:21.643122  656123 cri.go:89] found id: ""
	I1006 14:31:21.643143  656123 logs.go:282] 0 containers: []
	W1006 14:31:21.643150  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:31:21.643154  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:31:21.643222  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:31:21.670611  656123 cri.go:89] found id: ""
	I1006 14:31:21.670628  656123 logs.go:282] 0 containers: []
	W1006 14:31:21.670635  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:31:21.670642  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:31:21.670705  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:31:21.698443  656123 cri.go:89] found id: ""
	I1006 14:31:21.698460  656123 logs.go:282] 0 containers: []
	W1006 14:31:21.698467  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:31:21.698472  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:31:21.698521  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:31:21.726957  656123 cri.go:89] found id: ""
	I1006 14:31:21.726973  656123 logs.go:282] 0 containers: []
	W1006 14:31:21.726981  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:31:21.726986  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:31:21.727032  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:31:21.754606  656123 cri.go:89] found id: ""
	I1006 14:31:21.754628  656123 logs.go:282] 0 containers: []
	W1006 14:31:21.754638  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:31:21.754648  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:31:21.754661  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:31:21.814709  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:31:21.814731  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:31:21.846526  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:31:21.846543  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:31:21.915125  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:31:21.915156  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:31:21.929444  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:31:21.929482  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:31:21.988239  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:31:21.980740   12168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:21.981329   12168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:21.982927   12168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:21.983357   12168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:21.984775   12168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:31:21.980740   12168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:21.981329   12168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:21.982927   12168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:21.983357   12168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:21.984775   12168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 14:31:24.489339  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:31:24.500246  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:31:24.500303  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:31:24.527224  656123 cri.go:89] found id: ""
	I1006 14:31:24.527243  656123 logs.go:282] 0 containers: []
	W1006 14:31:24.527253  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:31:24.527258  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:31:24.527309  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:31:24.552540  656123 cri.go:89] found id: ""
	I1006 14:31:24.552559  656123 logs.go:282] 0 containers: []
	W1006 14:31:24.552567  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:31:24.552573  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:31:24.552636  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:31:24.581110  656123 cri.go:89] found id: ""
	I1006 14:31:24.581125  656123 logs.go:282] 0 containers: []
	W1006 14:31:24.581131  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:31:24.581138  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:31:24.581201  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:31:24.607563  656123 cri.go:89] found id: ""
	I1006 14:31:24.607580  656123 logs.go:282] 0 containers: []
	W1006 14:31:24.607588  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:31:24.607592  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:31:24.607649  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:31:24.633221  656123 cri.go:89] found id: ""
	I1006 14:31:24.633241  656123 logs.go:282] 0 containers: []
	W1006 14:31:24.633249  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:31:24.633255  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:31:24.633303  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:31:24.658521  656123 cri.go:89] found id: ""
	I1006 14:31:24.658540  656123 logs.go:282] 0 containers: []
	W1006 14:31:24.658547  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:31:24.658552  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:31:24.658611  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:31:24.684336  656123 cri.go:89] found id: ""
	I1006 14:31:24.684351  656123 logs.go:282] 0 containers: []
	W1006 14:31:24.684358  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:31:24.684367  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:31:24.684381  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:31:24.743258  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:31:24.735488   12275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:24.735921   12275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:24.737653   12275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:24.738173   12275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:24.739491   12275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:31:24.735488   12275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:24.735921   12275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:24.737653   12275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:24.738173   12275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:24.739491   12275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 14:31:24.743270  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:31:24.743283  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:31:24.802373  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:31:24.802398  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:31:24.832699  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:31:24.832716  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:31:24.898746  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:31:24.898768  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:31:27.413617  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:31:27.424393  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:31:27.424454  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:31:27.452153  656123 cri.go:89] found id: ""
	I1006 14:31:27.452173  656123 logs.go:282] 0 containers: []
	W1006 14:31:27.452181  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:31:27.452186  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:31:27.452268  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:31:27.477797  656123 cri.go:89] found id: ""
	I1006 14:31:27.477815  656123 logs.go:282] 0 containers: []
	W1006 14:31:27.477822  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:31:27.477827  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:31:27.477881  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:31:27.502952  656123 cri.go:89] found id: ""
	I1006 14:31:27.502971  656123 logs.go:282] 0 containers: []
	W1006 14:31:27.502978  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:31:27.502983  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:31:27.503039  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:31:27.529416  656123 cri.go:89] found id: ""
	I1006 14:31:27.529433  656123 logs.go:282] 0 containers: []
	W1006 14:31:27.529440  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:31:27.529444  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:31:27.529504  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:31:27.554632  656123 cri.go:89] found id: ""
	I1006 14:31:27.554651  656123 logs.go:282] 0 containers: []
	W1006 14:31:27.554659  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:31:27.554664  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:31:27.554713  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:31:27.580924  656123 cri.go:89] found id: ""
	I1006 14:31:27.580942  656123 logs.go:282] 0 containers: []
	W1006 14:31:27.580948  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:31:27.580954  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:31:27.581007  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:31:27.605807  656123 cri.go:89] found id: ""
	I1006 14:31:27.605826  656123 logs.go:282] 0 containers: []
	W1006 14:31:27.605836  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:31:27.605846  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:31:27.605860  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:31:27.618904  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:31:27.618922  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:31:27.677305  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:31:27.669937   12394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:27.670557   12394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:27.672091   12394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:27.672543   12394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:27.673638   12394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:31:27.669937   12394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:27.670557   12394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:27.672091   12394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:27.672543   12394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:27.673638   12394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 14:31:27.677315  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:31:27.677326  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:31:27.739103  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:31:27.739125  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:31:27.767028  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:31:27.767049  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:31:30.336333  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:31:30.348665  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:31:30.348724  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:31:30.377945  656123 cri.go:89] found id: ""
	I1006 14:31:30.377963  656123 logs.go:282] 0 containers: []
	W1006 14:31:30.377973  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:31:30.377979  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:31:30.378035  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:31:30.406369  656123 cri.go:89] found id: ""
	I1006 14:31:30.406391  656123 logs.go:282] 0 containers: []
	W1006 14:31:30.406400  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:31:30.406407  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:31:30.406484  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:31:30.435610  656123 cri.go:89] found id: ""
	I1006 14:31:30.435634  656123 logs.go:282] 0 containers: []
	W1006 14:31:30.435644  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:31:30.435650  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:31:30.435715  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:31:30.464182  656123 cri.go:89] found id: ""
	I1006 14:31:30.464201  656123 logs.go:282] 0 containers: []
	W1006 14:31:30.464222  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:31:30.464230  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:31:30.464285  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:31:30.493191  656123 cri.go:89] found id: ""
	I1006 14:31:30.493237  656123 logs.go:282] 0 containers: []
	W1006 14:31:30.493254  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:31:30.493260  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:31:30.493313  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:31:30.522664  656123 cri.go:89] found id: ""
	I1006 14:31:30.522684  656123 logs.go:282] 0 containers: []
	W1006 14:31:30.522695  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:31:30.522702  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:31:30.522762  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:31:30.553858  656123 cri.go:89] found id: ""
	I1006 14:31:30.553874  656123 logs.go:282] 0 containers: []
	W1006 14:31:30.553880  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:31:30.553891  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:31:30.553905  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:31:30.625537  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:31:30.625563  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:31:30.641100  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:31:30.641127  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:31:30.705527  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:31:30.696933   12514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:30.697691   12514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:30.699345   12514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:30.699934   12514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:30.701560   12514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:31:30.696933   12514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:30.697691   12514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:30.699345   12514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:30.699934   12514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:30.701560   12514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 14:31:30.705543  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:31:30.705560  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:31:30.768236  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:31:30.768263  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:31:33.302531  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:31:33.314251  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:31:33.314308  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:31:33.343374  656123 cri.go:89] found id: ""
	I1006 14:31:33.343394  656123 logs.go:282] 0 containers: []
	W1006 14:31:33.343404  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:31:33.343411  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:31:33.343491  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:31:33.369870  656123 cri.go:89] found id: ""
	I1006 14:31:33.369885  656123 logs.go:282] 0 containers: []
	W1006 14:31:33.369891  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:31:33.369895  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:31:33.369944  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:31:33.394611  656123 cri.go:89] found id: ""
	I1006 14:31:33.394631  656123 logs.go:282] 0 containers: []
	W1006 14:31:33.394640  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:31:33.394647  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:31:33.394696  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:31:33.420323  656123 cri.go:89] found id: ""
	I1006 14:31:33.420338  656123 logs.go:282] 0 containers: []
	W1006 14:31:33.420345  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:31:33.420350  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:31:33.420399  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:31:33.446454  656123 cri.go:89] found id: ""
	I1006 14:31:33.446483  656123 logs.go:282] 0 containers: []
	W1006 14:31:33.446493  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:31:33.446501  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:31:33.446557  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:31:33.471998  656123 cri.go:89] found id: ""
	I1006 14:31:33.472013  656123 logs.go:282] 0 containers: []
	W1006 14:31:33.472019  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:31:33.472025  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:31:33.472073  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:31:33.498038  656123 cri.go:89] found id: ""
	I1006 14:31:33.498052  656123 logs.go:282] 0 containers: []
	W1006 14:31:33.498058  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:31:33.498067  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:31:33.498077  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:31:33.554956  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:31:33.547323   12635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:33.547831   12635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:33.549458   12635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:33.549938   12635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:33.551501   12635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:31:33.547323   12635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:33.547831   12635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:33.549458   12635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:33.549938   12635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:33.551501   12635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 14:31:33.554967  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:31:33.554978  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:31:33.617723  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:31:33.617747  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:31:33.647466  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:31:33.647482  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:31:33.718107  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:31:33.718128  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:31:36.233955  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:31:36.245297  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:31:36.245362  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:31:36.272483  656123 cri.go:89] found id: ""
	I1006 14:31:36.272502  656123 logs.go:282] 0 containers: []
	W1006 14:31:36.272509  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:31:36.272515  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:31:36.272574  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:31:36.299177  656123 cri.go:89] found id: ""
	I1006 14:31:36.299192  656123 logs.go:282] 0 containers: []
	W1006 14:31:36.299199  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:31:36.299229  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:31:36.299284  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:31:36.325899  656123 cri.go:89] found id: ""
	I1006 14:31:36.325920  656123 logs.go:282] 0 containers: []
	W1006 14:31:36.325938  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:31:36.325946  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:31:36.326000  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:31:36.353043  656123 cri.go:89] found id: ""
	I1006 14:31:36.353059  656123 logs.go:282] 0 containers: []
	W1006 14:31:36.353065  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:31:36.353070  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:31:36.353117  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:31:36.379229  656123 cri.go:89] found id: ""
	I1006 14:31:36.379249  656123 logs.go:282] 0 containers: []
	W1006 14:31:36.379259  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:31:36.379263  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:31:36.379320  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:31:36.407572  656123 cri.go:89] found id: ""
	I1006 14:31:36.407589  656123 logs.go:282] 0 containers: []
	W1006 14:31:36.407596  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:31:36.407601  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:31:36.407651  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:31:36.435005  656123 cri.go:89] found id: ""
	I1006 14:31:36.435022  656123 logs.go:282] 0 containers: []
	W1006 14:31:36.435028  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:31:36.435036  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:31:36.435047  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:31:36.512293  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:31:36.512319  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:31:36.526942  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:31:36.526966  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:31:36.587325  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:31:36.579436   12771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:36.579991   12771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:36.581727   12771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:36.582244   12771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:36.583796   12771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:31:36.579436   12771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:36.579991   12771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:36.581727   12771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:36.582244   12771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:36.583796   12771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 14:31:36.587336  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:31:36.587349  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:31:36.648638  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:31:36.648672  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:31:39.181798  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:31:39.193122  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:31:39.193188  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:31:39.221286  656123 cri.go:89] found id: ""
	I1006 14:31:39.221304  656123 logs.go:282] 0 containers: []
	W1006 14:31:39.221312  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:31:39.221317  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:31:39.221376  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:31:39.248422  656123 cri.go:89] found id: ""
	I1006 14:31:39.248437  656123 logs.go:282] 0 containers: []
	W1006 14:31:39.248445  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:31:39.248450  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:31:39.248497  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:31:39.277291  656123 cri.go:89] found id: ""
	I1006 14:31:39.277308  656123 logs.go:282] 0 containers: []
	W1006 14:31:39.277316  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:31:39.277322  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:31:39.277390  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:31:39.303982  656123 cri.go:89] found id: ""
	I1006 14:31:39.303999  656123 logs.go:282] 0 containers: []
	W1006 14:31:39.304005  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:31:39.304011  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:31:39.304062  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:31:39.330654  656123 cri.go:89] found id: ""
	I1006 14:31:39.330674  656123 logs.go:282] 0 containers: []
	W1006 14:31:39.330681  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:31:39.330686  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:31:39.330732  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:31:39.357141  656123 cri.go:89] found id: ""
	I1006 14:31:39.357156  656123 logs.go:282] 0 containers: []
	W1006 14:31:39.357163  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:31:39.357168  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:31:39.357241  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:31:39.383968  656123 cri.go:89] found id: ""
	I1006 14:31:39.383986  656123 logs.go:282] 0 containers: []
	W1006 14:31:39.383993  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:31:39.384002  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:31:39.384014  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:31:39.451579  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:31:39.451604  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:31:39.465454  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:31:39.465475  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:31:39.523259  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:31:39.515550   12896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:39.516185   12896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:39.517720   12896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:39.518181   12896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:39.519823   12896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:31:39.515550   12896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:39.516185   12896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:39.517720   12896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:39.518181   12896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:39.519823   12896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 14:31:39.523273  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:31:39.523285  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:31:39.585241  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:31:39.585265  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
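	The sweep above is plain crictl: minikube asks the runtime for any container, running or exited, whose name matches each control-plane component, and every query comes back empty, suggesting the static pods were never created rather than created and crashed. The equivalent sweep can be reproduced by hand inside the node; a minimal sketch using the same commands the log shows:

	  # Hedged sketch of the same container sweep (run inside the node,
	  # e.g. via `minikube ssh`); commands are the ones visible in the log.
	  for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	              kube-controller-manager kindnet; do
	    ids=$(sudo crictl ps -a --quiet --name="$name")
	    [ -z "$ids" ] && echo "no container matching \"$name\"" || echo "$name: $ids"
	  done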
	[... the same log-gathering cycle repeats every ~3 seconds from 14:31:42 through 14:32:00 (kubectl PIDs 13013, 13133, 13273, 13401, 13525, 13621, 13743); each pass finds no kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, or kindnet containers and fails "describe nodes" with the same connection-refused errors against localhost:8441 ...]
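	The timestamps give away the control flow: probe for a kube-apiserver process, gather diagnostics when it is absent, wait, retry. A rough equivalent of that wait loop, with the 3-second interval and the attempt bound inferred from the log cadence rather than taken from minikube's source:

	  # Hedged sketch of the polling loop inferred from the timestamps above.
	  for i in $(seq 1 10); do
	    if sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; then
	      echo "kube-apiserver process found"; break
	    fi
	    echo "attempt $i: kube-apiserver not running; retrying in 3s"
	    sleep 3
	  done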
	I1006 14:32:02.626253  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:32:02.637551  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:32:02.637606  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:32:02.665023  656123 cri.go:89] found id: ""
	I1006 14:32:02.665040  656123 logs.go:282] 0 containers: []
	W1006 14:32:02.665050  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:32:02.665056  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:32:02.665118  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:32:02.692374  656123 cri.go:89] found id: ""
	I1006 14:32:02.692397  656123 logs.go:282] 0 containers: []
	W1006 14:32:02.692404  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:32:02.692409  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:32:02.692458  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:32:02.719922  656123 cri.go:89] found id: ""
	I1006 14:32:02.719942  656123 logs.go:282] 0 containers: []
	W1006 14:32:02.719953  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:32:02.719959  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:32:02.720014  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:32:02.746934  656123 cri.go:89] found id: ""
	I1006 14:32:02.746950  656123 logs.go:282] 0 containers: []
	W1006 14:32:02.746956  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:32:02.746962  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:32:02.747009  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:32:02.774417  656123 cri.go:89] found id: ""
	I1006 14:32:02.774435  656123 logs.go:282] 0 containers: []
	W1006 14:32:02.774442  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:32:02.774447  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:32:02.774496  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:32:02.801761  656123 cri.go:89] found id: ""
	I1006 14:32:02.801776  656123 logs.go:282] 0 containers: []
	W1006 14:32:02.801783  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:32:02.801788  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:32:02.801844  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:32:02.828981  656123 cri.go:89] found id: ""
	I1006 14:32:02.828998  656123 logs.go:282] 0 containers: []
	W1006 14:32:02.829005  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:32:02.829014  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:32:02.829028  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:32:02.895754  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:32:02.895778  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:32:02.909930  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:32:02.909950  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:32:02.968533  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:32:02.961042   13877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:32:02.961577   13877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:32:02.963104   13877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:32:02.963565   13877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:32:02.965085   13877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:32:02.961042   13877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:32:02.961577   13877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:32:02.963104   13877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:32:02.963565   13877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:32:02.965085   13877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 14:32:02.968546  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:32:02.968560  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:32:03.033943  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:32:03.033967  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
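The sweep above is how minikube decides whether a control plane is present: for each expected component it runs `crictl ps -a --quiet --name=<component>` over SSH and treats an empty ID list as "No container was found". A minimal sketch reproducing the same check by hand, assuming shell access to the node (e.g. via `minikube ssh`); the component list mirrors the log:

    # check each expected control-plane component the way the log does
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
      ids=$(sudo crictl ps -a --quiet --name="$c")
      if [ -z "$ids" ]; then
        echo "no container found matching \"$c\""
      else
        echo "$c: $ids"
      fi
    done

Here every component comes back empty, so the gatherer falls through to the kubelet journal, dmesg, describe-nodes, CRI-O journal and container status.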
	I1006 14:32:05.566153  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:32:05.577534  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:32:05.577601  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:32:05.604282  656123 cri.go:89] found id: ""
	I1006 14:32:05.604301  656123 logs.go:282] 0 containers: []
	W1006 14:32:05.604311  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:32:05.604317  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:32:05.604375  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:32:05.631089  656123 cri.go:89] found id: ""
	I1006 14:32:05.631105  656123 logs.go:282] 0 containers: []
	W1006 14:32:05.631112  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:32:05.631116  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:32:05.631172  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:32:05.658464  656123 cri.go:89] found id: ""
	I1006 14:32:05.658484  656123 logs.go:282] 0 containers: []
	W1006 14:32:05.658495  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:32:05.658501  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:32:05.658559  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:32:05.685951  656123 cri.go:89] found id: ""
	I1006 14:32:05.685971  656123 logs.go:282] 0 containers: []
	W1006 14:32:05.685980  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:32:05.685987  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:32:05.686043  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:32:05.712003  656123 cri.go:89] found id: ""
	I1006 14:32:05.712020  656123 logs.go:282] 0 containers: []
	W1006 14:32:05.712028  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:32:05.712033  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:32:05.712093  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:32:05.740632  656123 cri.go:89] found id: ""
	I1006 14:32:05.740652  656123 logs.go:282] 0 containers: []
	W1006 14:32:05.740660  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:32:05.740667  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:32:05.740728  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:32:05.766042  656123 cri.go:89] found id: ""
	I1006 14:32:05.766064  656123 logs.go:282] 0 containers: []
	W1006 14:32:05.766072  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:32:05.766080  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:32:05.766092  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:32:05.837102  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:32:05.837132  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:32:05.851014  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:32:05.851038  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:32:05.910902  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:32:05.903038   14001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:32:05.903650   14001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:32:05.905294   14001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:32:05.905834   14001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:32:05.907440   14001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:32:05.903038   14001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:32:05.903650   14001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:32:05.905294   14001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:32:05.905834   14001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:32:05.907440   14001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 14:32:05.910914  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:32:05.910927  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:32:05.975171  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:32:05.975197  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:32:08.507407  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:32:08.518312  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:32:08.518362  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:32:08.544556  656123 cri.go:89] found id: ""
	I1006 14:32:08.544575  656123 logs.go:282] 0 containers: []
	W1006 14:32:08.544585  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:32:08.544591  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:32:08.544646  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:32:08.569832  656123 cri.go:89] found id: ""
	I1006 14:32:08.569849  656123 logs.go:282] 0 containers: []
	W1006 14:32:08.569858  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:32:08.569863  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:32:08.569911  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:32:08.595352  656123 cri.go:89] found id: ""
	I1006 14:32:08.595368  656123 logs.go:282] 0 containers: []
	W1006 14:32:08.595377  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:32:08.595383  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:32:08.595447  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:32:08.621980  656123 cri.go:89] found id: ""
	I1006 14:32:08.621995  656123 logs.go:282] 0 containers: []
	W1006 14:32:08.622001  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:32:08.622006  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:32:08.622062  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:32:08.648436  656123 cri.go:89] found id: ""
	I1006 14:32:08.648451  656123 logs.go:282] 0 containers: []
	W1006 14:32:08.648458  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:32:08.648462  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:32:08.648519  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:32:08.673561  656123 cri.go:89] found id: ""
	I1006 14:32:08.673579  656123 logs.go:282] 0 containers: []
	W1006 14:32:08.673589  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:32:08.673595  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:32:08.673657  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:32:08.699829  656123 cri.go:89] found id: ""
	I1006 14:32:08.699847  656123 logs.go:282] 0 containers: []
	W1006 14:32:08.699855  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:32:08.699866  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:32:08.699884  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:32:08.712951  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:32:08.712972  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:32:08.769035  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:32:08.761477   14117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:32:08.762001   14117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:32:08.763631   14117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:32:08.764099   14117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:32:08.765640   14117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:32:08.761477   14117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:32:08.762001   14117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:32:08.763631   14117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:32:08.764099   14117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:32:08.765640   14117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 14:32:08.769047  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:32:08.769063  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:32:08.832511  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:32:08.832534  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:32:08.861346  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:32:08.861364  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:32:11.430582  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:32:11.441872  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:32:11.441923  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:32:11.467567  656123 cri.go:89] found id: ""
	I1006 14:32:11.467586  656123 logs.go:282] 0 containers: []
	W1006 14:32:11.467596  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:32:11.467603  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:32:11.467660  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:32:11.494656  656123 cri.go:89] found id: ""
	I1006 14:32:11.494683  656123 logs.go:282] 0 containers: []
	W1006 14:32:11.494690  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:32:11.494695  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:32:11.494743  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:32:11.521748  656123 cri.go:89] found id: ""
	I1006 14:32:11.521763  656123 logs.go:282] 0 containers: []
	W1006 14:32:11.521770  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:32:11.521775  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:32:11.521820  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:32:11.548602  656123 cri.go:89] found id: ""
	I1006 14:32:11.548620  656123 logs.go:282] 0 containers: []
	W1006 14:32:11.548626  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:32:11.548632  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:32:11.548691  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:32:11.576572  656123 cri.go:89] found id: ""
	I1006 14:32:11.576588  656123 logs.go:282] 0 containers: []
	W1006 14:32:11.576595  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:32:11.576600  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:32:11.576651  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:32:11.603326  656123 cri.go:89] found id: ""
	I1006 14:32:11.603346  656123 logs.go:282] 0 containers: []
	W1006 14:32:11.603355  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:32:11.603360  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:32:11.603415  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:32:11.629710  656123 cri.go:89] found id: ""
	I1006 14:32:11.629728  656123 logs.go:282] 0 containers: []
	W1006 14:32:11.629738  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:32:11.629747  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:32:11.629757  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:32:11.700650  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:32:11.700726  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:32:11.714603  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:32:11.714630  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:32:11.772602  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:32:11.764966   14244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:32:11.765455   14244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:32:11.767171   14244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:32:11.767660   14244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:32:11.769186   14244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:32:11.764966   14244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:32:11.765455   14244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:32:11.767171   14244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:32:11.767660   14244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:32:11.769186   14244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 14:32:11.772614  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:32:11.772626  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:32:11.833230  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:32:11.833254  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
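Note the cadence: the sweeps above repeat at 14:32:02, :05, :08 and :11, i.e. minikube polls roughly every three seconds for a kube-apiserver process, re-gathering diagnostics each time, until the restart deadline expires (which happens just below, after 4m04s). A rough sketch of that poll-until-found pattern; the 3 s interval is inferred from the timestamps, not taken from minikube's source:

    # hypothetical poll loop matching the timestamps above
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      sleep 3
    done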
	I1006 14:32:14.365875  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:32:14.376698  656123 kubeadm.go:601] duration metric: took 4m4.218544485s to restartPrimaryControlPlane
	W1006 14:32:14.376820  656123 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1006 14:32:14.376904  656123 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
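At this point minikube has spent 4m04s trying to restart the existing control plane and falls back to a full reset. The reset it issues can be run by hand on the node exactly as logged; `--force` skips the interactive confirmation and `--cri-socket` points kubeadm at CRI-O:

    # same reset as the log line above, runnable interactively
    sudo env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" \
      kubeadm reset --cri-socket /var/run/crio/crio.sock --force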
	I1006 14:32:14.835776  656123 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1006 14:32:14.848804  656123 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1006 14:32:14.857253  656123 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1006 14:32:14.857310  656123 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1006 14:32:14.864786  656123 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1006 14:32:14.864795  656123 kubeadm.go:157] found existing configuration files:
	
	I1006 14:32:14.864835  656123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1006 14:32:14.872239  656123 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1006 14:32:14.872285  656123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1006 14:32:14.879414  656123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1006 14:32:14.886697  656123 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1006 14:32:14.886741  656123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1006 14:32:14.893638  656123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1006 14:32:14.900861  656123 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1006 14:32:14.900895  656123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1006 14:32:14.907789  656123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1006 14:32:14.914902  656123 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1006 14:32:14.914933  656123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
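The grep/rm pairs above are minikube's stale-kubeconfig cleanup: each file under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8441. Since `kubeadm reset` just deleted them, every grep exits with status 2 (file missing) and the `rm -f` is a no-op. Condensed into one loop:

    # the same check-and-remove pass, condensed
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q 'https://control-plane.minikube.internal:8441' "/etc/kubernetes/$f" \
        || sudo rm -f "/etc/kubernetes/$f"
    done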
	I1006 14:32:14.921800  656123 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1006 14:32:14.978601  656123 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1006 14:32:15.038549  656123 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
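Both preflight warnings are benign here: SystemVerification is expected under the docker driver, where the host kernel's module tree is not visible in the container (minikube explicitly skips this check, per the earlier "ignoring SystemVerification" line), and the Service-Kubelet warning carries its own fix:

    # per the warning's own suggestion, if you want it gone
    sudo systemctl enable kubelet.service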
	I1006 14:36:17.406896  656123 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1006 14:36:17.407019  656123 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
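All three wait-control-plane probes failed with connection refused, i.e. nothing was listening on any of the checked ports. The endpoints are spelled out in the error and can be probed directly from the node; -k skips TLS verification, since the apiserver serves minikube's self-signed chain:

    # probe the same endpoints kubeadm's wait-control-plane phase checks
    curl -sk https://192.168.49.2:8441/livez     # kube-apiserver
    curl -sk https://127.0.0.1:10259/livez       # kube-scheduler
    curl -sk https://127.0.0.1:10257/healthz     # kube-controller-manager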
	I1006 14:36:17.410627  656123 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1006 14:36:17.410683  656123 kubeadm.go:318] [preflight] Running pre-flight checks
	I1006 14:36:17.410779  656123 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1006 14:36:17.410840  656123 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1006 14:36:17.410869  656123 kubeadm.go:318] OS: Linux
	I1006 14:36:17.410914  656123 kubeadm.go:318] CGROUPS_CPU: enabled
	I1006 14:36:17.410949  656123 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1006 14:36:17.411007  656123 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1006 14:36:17.411060  656123 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1006 14:36:17.411098  656123 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1006 14:36:17.411140  656123 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1006 14:36:17.411189  656123 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1006 14:36:17.411245  656123 kubeadm.go:318] CGROUPS_IO: enabled
	I1006 14:36:17.411317  656123 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1006 14:36:17.411401  656123 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1006 14:36:17.411485  656123 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1006 14:36:17.411556  656123 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1006 14:36:17.413722  656123 out.go:252]   - Generating certificates and keys ...
	I1006 14:36:17.413795  656123 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1006 14:36:17.413884  656123 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1006 14:36:17.413987  656123 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1006 14:36:17.414057  656123 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1006 14:36:17.414137  656123 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1006 14:36:17.414181  656123 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1006 14:36:17.414260  656123 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1006 14:36:17.414334  656123 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1006 14:36:17.414439  656123 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1006 14:36:17.414518  656123 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1006 14:36:17.414578  656123 kubeadm.go:318] [certs] Using the existing "sa" key
	I1006 14:36:17.414662  656123 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1006 14:36:17.414728  656123 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1006 14:36:17.414803  656123 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1006 14:36:17.414845  656123 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1006 14:36:17.414916  656123 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1006 14:36:17.414967  656123 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1006 14:36:17.415028  656123 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1006 14:36:17.415104  656123 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1006 14:36:17.416892  656123 out.go:252]   - Booting up control plane ...
	I1006 14:36:17.416963  656123 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1006 14:36:17.417045  656123 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1006 14:36:17.417099  656123 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1006 14:36:17.417195  656123 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1006 14:36:17.417298  656123 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1006 14:36:17.417388  656123 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1006 14:36:17.417462  656123 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1006 14:36:17.417493  656123 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1006 14:36:17.417595  656123 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1006 14:36:17.417679  656123 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1006 14:36:17.417755  656123 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 502.528699ms
	I1006 14:36:17.417834  656123 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1006 14:36:17.417918  656123 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	I1006 14:36:17.418000  656123 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1006 14:36:17.418064  656123 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1006 14:36:17.418126  656123 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000416419s
	I1006 14:36:17.418196  656123 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000737625s
	I1006 14:36:17.418279  656123 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.00070414s
	I1006 14:36:17.418282  656123 kubeadm.go:318] 
	I1006 14:36:17.418350  656123 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1006 14:36:17.418415  656123 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1006 14:36:17.418514  656123 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1006 14:36:17.418595  656123 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1006 14:36:17.418668  656123 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1006 14:36:17.418749  656123 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1006 14:36:17.418809  656123 kubeadm.go:318] 
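kubeadm's hint above is the right next step. A hypothetical variant that dumps the tail of every kube container's log in one pass, instead of copying container IDs by hand (crictl's --name does substring matching):

    # dump the last lines of every kube container CRI-O knows about
    for id in $(sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a --quiet --name kube); do
      echo "=== $id ==="
      sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs --tail 50 "$id"
    done

In this run, though, the earlier sweeps found no containers at all, which points at CRI-O or the kubelet rather than at a crashing component.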
	W1006 14:36:17.418920  656123 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 502.528699ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000416419s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000737625s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.00070414s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
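Before minikube retries, the most informative artifacts are the unit journals its log gatherer already reads; the same commands work interactively on the node:

    # the same journal reads the gatherer runs above
    sudo journalctl -u kubelet -n 400   # why the static pods never start
    sudo journalctl -u crio -n 400      # image pulls, container create/start errors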
	
	I1006 14:36:17.419037  656123 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1006 14:36:17.865331  656123 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1006 14:36:17.878364  656123 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1006 14:36:17.878407  656123 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1006 14:36:17.886488  656123 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1006 14:36:17.886495  656123 kubeadm.go:157] found existing configuration files:
	
	I1006 14:36:17.886535  656123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1006 14:36:17.894142  656123 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1006 14:36:17.894180  656123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1006 14:36:17.901791  656123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1006 14:36:17.909427  656123 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1006 14:36:17.909474  656123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1006 14:36:17.916720  656123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1006 14:36:17.924474  656123 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1006 14:36:17.924517  656123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1006 14:36:17.931765  656123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1006 14:36:17.939342  656123 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1006 14:36:17.939397  656123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1006 14:36:17.947232  656123 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
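The long --ignore-preflight-errors list downgrades each named check to a warning; everything else is stock kubeadm driven by /var/tmp/minikube/kubeadm.yaml. To see what preflight alone reports, kubeadm exposes the phase separately; a sketch using the same binaries and config file as the log:

    # run only the preflight phase against minikube's generated config
    sudo env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" \
      kubeadm init phase preflight --config /var/tmp/minikube/kubeadm.yaml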
	I1006 14:36:17.986103  656123 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1006 14:36:17.986155  656123 kubeadm.go:318] [preflight] Running pre-flight checks
	I1006 14:36:18.005746  656123 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1006 14:36:18.005847  656123 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1006 14:36:18.005884  656123 kubeadm.go:318] OS: Linux
	I1006 14:36:18.005928  656123 kubeadm.go:318] CGROUPS_CPU: enabled
	I1006 14:36:18.005966  656123 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1006 14:36:18.006009  656123 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1006 14:36:18.006047  656123 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1006 14:36:18.006115  656123 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1006 14:36:18.006229  656123 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1006 14:36:18.006274  656123 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1006 14:36:18.006314  656123 kubeadm.go:318] CGROUPS_IO: enabled
	I1006 14:36:18.063701  656123 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1006 14:36:18.063828  656123 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1006 14:36:18.063979  656123 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1006 14:36:18.070276  656123 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1006 14:36:18.073073  656123 out.go:252]   - Generating certificates and keys ...
	I1006 14:36:18.073146  656123 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1006 14:36:18.073230  656123 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1006 14:36:18.073310  656123 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1006 14:36:18.073360  656123 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1006 14:36:18.073469  656123 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1006 14:36:18.073537  656123 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1006 14:36:18.073593  656123 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1006 14:36:18.073643  656123 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1006 14:36:18.073731  656123 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1006 14:36:18.073828  656123 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1006 14:36:18.073881  656123 kubeadm.go:318] [certs] Using the existing "sa" key
	I1006 14:36:18.073950  656123 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1006 14:36:18.358369  656123 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1006 14:36:18.660416  656123 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1006 14:36:18.904822  656123 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1006 14:36:19.181972  656123 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1006 14:36:19.419333  656123 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1006 14:36:19.419883  656123 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1006 14:36:19.422018  656123 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1006 14:36:19.424552  656123 out.go:252]   - Booting up control plane ...
	I1006 14:36:19.424633  656123 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1006 14:36:19.424695  656123 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1006 14:36:19.424766  656123 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1006 14:36:19.438773  656123 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1006 14:36:19.438935  656123 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1006 14:36:19.446167  656123 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1006 14:36:19.446370  656123 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1006 14:36:19.446407  656123 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1006 14:36:19.549636  656123 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1006 14:36:19.549773  656123 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1006 14:36:21.051643  656123 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.501975645s
	I1006 14:36:21.055540  656123 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1006 14:36:21.055642  656123 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	I1006 14:36:21.055761  656123 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1006 14:36:21.055838  656123 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1006 14:40:21.055953  656123 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000134857s
	I1006 14:40:21.056046  656123 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.00022136s
	I1006 14:40:21.056101  656123 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000206831s
	I1006 14:40:21.056104  656123 kubeadm.go:318] 
	I1006 14:40:21.056173  656123 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1006 14:40:21.056304  656123 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1006 14:40:21.056432  656123 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1006 14:40:21.056532  656123 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1006 14:40:21.056641  656123 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1006 14:40:21.056764  656123 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1006 14:40:21.056770  656123 kubeadm.go:318] 
	I1006 14:40:21.060023  656123 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1006 14:40:21.060145  656123 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1006 14:40:21.060722  656123 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8441/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline]
	I1006 14:40:21.060819  656123 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1006 14:40:21.060909  656123 kubeadm.go:402] duration metric: took 12m10.94114452s to StartCluster
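Net result: StartCluster gives up after 12m10s, with both kubeadm init attempts timing out and no control-plane container ever observed. Given connection refused on 8441/10257/10259 and consistently empty crictl listings, a quick sanity check (assuming iproute2's ss is available on the node) before digging into the journals:

    # confirm nothing is listening on the control-plane ports
    sudo ss -ltnp | grep -E ':(8441|10257|10259)' || echo 'nothing listening'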
	I1006 14:40:21.060976  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:40:21.061036  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:40:21.089107  656123 cri.go:89] found id: ""
	I1006 14:40:21.089130  656123 logs.go:282] 0 containers: []
	W1006 14:40:21.089137  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:40:21.089143  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:40:21.089218  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:40:21.116923  656123 cri.go:89] found id: ""
	I1006 14:40:21.116942  656123 logs.go:282] 0 containers: []
	W1006 14:40:21.116948  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:40:21.116954  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:40:21.117001  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:40:21.144161  656123 cri.go:89] found id: ""
	I1006 14:40:21.144196  656123 logs.go:282] 0 containers: []
	W1006 14:40:21.144219  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:40:21.144227  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:40:21.144287  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:40:21.173031  656123 cri.go:89] found id: ""
	I1006 14:40:21.173051  656123 logs.go:282] 0 containers: []
	W1006 14:40:21.173059  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:40:21.173065  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:40:21.173117  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:40:21.200194  656123 cri.go:89] found id: ""
	I1006 14:40:21.200232  656123 logs.go:282] 0 containers: []
	W1006 14:40:21.200242  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:40:21.200249  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:40:21.200313  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:40:21.227692  656123 cri.go:89] found id: ""
	I1006 14:40:21.227708  656123 logs.go:282] 0 containers: []
	W1006 14:40:21.227715  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:40:21.227720  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:40:21.227777  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:40:21.255803  656123 cri.go:89] found id: ""
	I1006 14:40:21.255827  656123 logs.go:282] 0 containers: []
	W1006 14:40:21.255836  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:40:21.255848  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:40:21.255863  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:40:21.269683  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:40:21.269708  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:40:21.330259  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:40:21.322987   15591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:40:21.323612   15591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:40:21.324719   15591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:40:21.325098   15591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:40:21.326635   15591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:40:21.322987   15591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:40:21.323612   15591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:40:21.324719   15591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:40:21.325098   15591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:40:21.326635   15591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 14:40:21.330282  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:40:21.330295  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:40:21.395010  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:40:21.395036  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:40:21.425956  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:40:21.425975  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1006 14:40:21.494244  656123 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.501975645s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000134857s
	[control-plane-check] kube-scheduler is not healthy after 4m0.00022136s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000206831s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8441/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline]
	To see the stack trace of this error execute with --v=5 or higher
	W1006 14:40:21.494316  656123 out.go:285] * 
	W1006 14:40:21.494402  656123 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.501975645s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000134857s
	[control-plane-check] kube-scheduler is not healthy after 4m0.00022136s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000206831s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8441/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1006 14:40:21.494415  656123 out.go:285] * 
	W1006 14:40:21.496145  656123 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1006 14:40:21.499891  656123 out.go:203] 
	W1006 14:40:21.500973  656123 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.501975645s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000134857s
	[control-plane-check] kube-scheduler is not healthy after 4m0.00022136s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000206831s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8441/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1006 14:40:21.500999  656123 out.go:285] * 
	I1006 14:40:21.502231  656123 out.go:203] 
	
	
	==> CRI-O <==
	Oct 06 14:40:14 functional-135520 crio[5849]: time="2025-10-06T14:40:14.002436576Z" level=info msg="createCtr: removing container d09a83215e7ba678a591274f52a3c4e3bbafe4f50c309bdbad0db08fd40f72ad" id=7954ab01-b4a1-4af8-864a-83bce242e907 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:40:14 functional-135520 crio[5849]: time="2025-10-06T14:40:14.002464878Z" level=info msg="createCtr: deleting container d09a83215e7ba678a591274f52a3c4e3bbafe4f50c309bdbad0db08fd40f72ad from storage" id=7954ab01-b4a1-4af8-864a-83bce242e907 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:40:14 functional-135520 crio[5849]: time="2025-10-06T14:40:14.004394482Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-functional-135520_kube-system_9c0f460a73b4e4a7087ce2a722c4cad4_0" id=7954ab01-b4a1-4af8-864a-83bce242e907 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:40:17 functional-135520 crio[5849]: time="2025-10-06T14:40:17.980597758Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=9307489c-7a13-4906-9ddf-5af7e3827d27 name=/runtime.v1.ImageService/ImageStatus
	Oct 06 14:40:17 functional-135520 crio[5849]: time="2025-10-06T14:40:17.981492601Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=a030c920-74cb-44f7-9d05-4afb02030a5a name=/runtime.v1.ImageService/ImageStatus
	Oct 06 14:40:17 functional-135520 crio[5849]: time="2025-10-06T14:40:17.982361324Z" level=info msg="Creating container: kube-system/etcd-functional-135520/etcd" id=8347184d-14c3-48dc-9459-ef660db6f6e3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:40:17 functional-135520 crio[5849]: time="2025-10-06T14:40:17.982590193Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 14:40:17 functional-135520 crio[5849]: time="2025-10-06T14:40:17.985847299Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 14:40:17 functional-135520 crio[5849]: time="2025-10-06T14:40:17.986311869Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 14:40:18 functional-135520 crio[5849]: time="2025-10-06T14:40:18.001227615Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=8347184d-14c3-48dc-9459-ef660db6f6e3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:40:18 functional-135520 crio[5849]: time="2025-10-06T14:40:18.00268739Z" level=info msg="createCtr: deleting container ID 33cdc58a0c490dce49db5b8cff183237a957ec9252749f4025e5d44a3011f822 from idIndex" id=8347184d-14c3-48dc-9459-ef660db6f6e3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:40:18 functional-135520 crio[5849]: time="2025-10-06T14:40:18.002729594Z" level=info msg="createCtr: removing container 33cdc58a0c490dce49db5b8cff183237a957ec9252749f4025e5d44a3011f822" id=8347184d-14c3-48dc-9459-ef660db6f6e3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:40:18 functional-135520 crio[5849]: time="2025-10-06T14:40:18.002765547Z" level=info msg="createCtr: deleting container 33cdc58a0c490dce49db5b8cff183237a957ec9252749f4025e5d44a3011f822 from storage" id=8347184d-14c3-48dc-9459-ef660db6f6e3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:40:18 functional-135520 crio[5849]: time="2025-10-06T14:40:18.004797529Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-functional-135520_kube-system_f24ebbe4b3fc964d32e35d345c0d3653_0" id=8347184d-14c3-48dc-9459-ef660db6f6e3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:40:20 functional-135520 crio[5849]: time="2025-10-06T14:40:20.979681419Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=2443f8a8-1b76-4132-aa6d-cfe7c76e178d name=/runtime.v1.ImageService/ImageStatus
	Oct 06 14:40:20 functional-135520 crio[5849]: time="2025-10-06T14:40:20.98042903Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=679e6e26-978c-44f1-a68d-da03ad309e01 name=/runtime.v1.ImageService/ImageStatus
	Oct 06 14:40:20 functional-135520 crio[5849]: time="2025-10-06T14:40:20.981333955Z" level=info msg="Creating container: kube-system/kube-scheduler-functional-135520/kube-scheduler" id=d9b13087-0047-423a-b1ba-8f1b16d6d4e9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:40:20 functional-135520 crio[5849]: time="2025-10-06T14:40:20.981653588Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 14:40:20 functional-135520 crio[5849]: time="2025-10-06T14:40:20.985602553Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 14:40:20 functional-135520 crio[5849]: time="2025-10-06T14:40:20.986026326Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 14:40:21 functional-135520 crio[5849]: time="2025-10-06T14:40:21.002198437Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=d9b13087-0047-423a-b1ba-8f1b16d6d4e9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:40:21 functional-135520 crio[5849]: time="2025-10-06T14:40:21.003805453Z" level=info msg="createCtr: deleting container ID a98ed8aedfa1ef039e7da182e75565487fd26606af94b95038816b9a7b11df7d from idIndex" id=d9b13087-0047-423a-b1ba-8f1b16d6d4e9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:40:21 functional-135520 crio[5849]: time="2025-10-06T14:40:21.00384221Z" level=info msg="createCtr: removing container a98ed8aedfa1ef039e7da182e75565487fd26606af94b95038816b9a7b11df7d" id=d9b13087-0047-423a-b1ba-8f1b16d6d4e9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:40:21 functional-135520 crio[5849]: time="2025-10-06T14:40:21.003874157Z" level=info msg="createCtr: deleting container a98ed8aedfa1ef039e7da182e75565487fd26606af94b95038816b9a7b11df7d from storage" id=d9b13087-0047-423a-b1ba-8f1b16d6d4e9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:40:21 functional-135520 crio[5849]: time="2025-10-06T14:40:21.006007213Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-functional-135520_kube-system_5115bd1eba9594a3f2b99b5d6a4b9d59_0" id=d9b13087-0047-423a-b1ba-8f1b16d6d4e9 name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:40:24.702032   15902 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:40:24.702581   15902 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:40:24.704264   15902 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:40:24.704745   15902 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:40:24.706297   15902 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	
	
	==> kernel <==
	 14:40:24 up  5:22,  0 user,  load average: 0.00, 0.04, 0.24
	Linux functional-135520 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 06 14:40:14 functional-135520 kubelet[14966]:  > logger="UnhandledError"
	Oct 06 14:40:14 functional-135520 kubelet[14966]: E1006 14:40:14.004762   14966 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-functional-135520" podUID="9c0f460a73b4e4a7087ce2a722c4cad4"
	Oct 06 14:40:16 functional-135520 kubelet[14966]: E1006 14:40:16.019446   14966 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8441/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8441: connect: connection refused" event="&Event{ObjectMeta:{functional-135520.186beda7023a08f5  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-135520,UID:functional-135520,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node functional-135520 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:functional-135520,},FirstTimestamp:2025-10-06 14:36:20.970989813 +0000 UTC m=+1.419813170,LastTimestamp:2025-10-06 14:36:20.970989813 +0000 UTC m=+1.419813170,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-135520,}"
	Oct 06 14:40:17 functional-135520 kubelet[14966]: E1006 14:40:17.081734   14966 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	Oct 06 14:40:17 functional-135520 kubelet[14966]: E1006 14:40:17.600142   14966 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-135520?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="7s"
	Oct 06 14:40:17 functional-135520 kubelet[14966]: I1006 14:40:17.758034   14966 kubelet_node_status.go:75] "Attempting to register node" node="functional-135520"
	Oct 06 14:40:17 functional-135520 kubelet[14966]: E1006 14:40:17.758411   14966 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8441/api/v1/nodes\": dial tcp 192.168.49.2:8441: connect: connection refused" node="functional-135520"
	Oct 06 14:40:17 functional-135520 kubelet[14966]: E1006 14:40:17.980098   14966 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-135520\" not found" node="functional-135520"
	Oct 06 14:40:18 functional-135520 kubelet[14966]: E1006 14:40:18.005131   14966 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 06 14:40:18 functional-135520 kubelet[14966]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 06 14:40:18 functional-135520 kubelet[14966]:  > podSandboxID="91ab0a64f17ca953284929376780a86381ab6a8cae1f4af7da89790dc4c0e8df"
	Oct 06 14:40:18 functional-135520 kubelet[14966]: E1006 14:40:18.005270   14966 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 06 14:40:18 functional-135520 kubelet[14966]:         container etcd start failed in pod etcd-functional-135520_kube-system(f24ebbe4b3fc964d32e35d345c0d3653): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 06 14:40:18 functional-135520 kubelet[14966]:  > logger="UnhandledError"
	Oct 06 14:40:18 functional-135520 kubelet[14966]: E1006 14:40:18.005308   14966 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-functional-135520" podUID="f24ebbe4b3fc964d32e35d345c0d3653"
	Oct 06 14:40:20 functional-135520 kubelet[14966]: E1006 14:40:20.979281   14966 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-135520\" not found" node="functional-135520"
	Oct 06 14:40:20 functional-135520 kubelet[14966]: E1006 14:40:20.993487   14966 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-135520\" not found"
	Oct 06 14:40:21 functional-135520 kubelet[14966]: E1006 14:40:21.006289   14966 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 06 14:40:21 functional-135520 kubelet[14966]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 06 14:40:21 functional-135520 kubelet[14966]:  > podSandboxID="526b997044ad8cc54e45aef5a5faa2edaadc9cabbedd2784eaded2bd6562135f"
	Oct 06 14:40:21 functional-135520 kubelet[14966]: E1006 14:40:21.006389   14966 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 06 14:40:21 functional-135520 kubelet[14966]:         container kube-scheduler start failed in pod kube-scheduler-functional-135520_kube-system(5115bd1eba9594a3f2b99b5d6a4b9d59): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 06 14:40:21 functional-135520 kubelet[14966]:  > logger="UnhandledError"
	Oct 06 14:40:21 functional-135520 kubelet[14966]: E1006 14:40:21.006418   14966 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-functional-135520" podUID="5115bd1eba9594a3f2b99b5d6a4b9d59"
	Oct 06 14:40:24 functional-135520 kubelet[14966]: E1006 14:40:24.601137   14966 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-135520?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="7s"
	

-- /stdout --
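Note: the kubeadm output above (repeated at each log level) points at one underlying failure: CRI-O could not create any control-plane container ("cannot open sd-bus: No such file or directory" in the CRI-O and kubelet sections), so every health check saw connection refused. The triage step kubeadm suggests is the crictl listing; the sketch below wraps that exact listing in Go purely for illustration, assuming crictl is installed and the CRI-O socket path matches the log.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Runs the listing recommended in the kubeadm hint above, then filters
	// for kube-* containers and skips pause sandboxes, mirroring
	// `crictl ps -a | grep kube | grep -v pause`.
	out, err := exec.Command("sudo", "crictl",
		"--runtime-endpoint", "unix:///var/run/crio/crio.sock",
		"ps", "-a").Output()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	for _, line := range strings.Split(string(out), "\n") {
		if strings.Contains(line, "kube") && !strings.Contains(line, "pause") {
			fmt.Println(line)
		}
	}
}

In this run the container status section above is empty, which is consistent with creation failing before any container reached the runtime.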
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-135520 -n functional-135520
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-135520 -n functional-135520: exit status 2 (307.090272ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-135520" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/serial/ComponentHealth (1.99s)
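Note: ComponentHealth fails for the same root cause: kubeadm's wait-control-plane phase probes each component's health endpoint and every probe was refused. A minimal Go sketch of that style of probe, using the endpoint URLs from this report; the timeout and the skipped TLS verification are assumptions for a local illustration, not kubeadm's exact behavior.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Components serve health endpoints over self-signed TLS, so certificate
	// verification is skipped here, as a quick local probe would.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	endpoints := map[string]string{
		"kube-apiserver":          "https://192.168.49.2:8441/livez",
		"kube-controller-manager": "https://127.0.0.1:10257/healthz",
		"kube-scheduler":          "https://127.0.0.1:10259/livez",
	}
	for name, url := range endpoints {
		resp, err := client.Get(url)
		if err != nil {
			// "connection refused" here matches the failures in the log:
			// the component container was never created by CRI-O.
			fmt.Printf("%s: unhealthy: %v\n", name, err)
			continue
		}
		resp.Body.Close()
		fmt.Printf("%s: HTTP %d\n", name, resp.StatusCode)
	}
}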

x
+
TestFunctional/serial/InvalidService (0.05s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-135520 apply -f testdata/invalidsvc.yaml
functional_test.go:2326: (dbg) Non-zero exit: kubectl --context functional-135520 apply -f testdata/invalidsvc.yaml: exit status 1 (47.361348ms)

** stderr ** 
	error: error validating "testdata/invalidsvc.yaml": error validating data: failed to download openapi: Get "https://192.168.49.2:8441/openapi/v2?timeout=32s": dial tcp 192.168.49.2:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false

** /stderr **
functional_test.go:2328: kubectl --context functional-135520 apply -f testdata/invalidsvc.yaml failed: exit status 1
--- FAIL: TestFunctional/serial/InvalidService (0.05s)
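Note: the stderr above is kubectl's client-side validation failing to download the OpenAPI schema, not the invalid Service itself being rejected. Turning validation off only skips that download; with the apiserver at 192.168.49.2:8441 unreachable, the apply would still fail. A hypothetical retry with validation disabled, sketched in Go for illustration and assuming kubectl is on PATH:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// --validate=false avoids the /openapi/v2 fetch that failed above, but
	// the request itself still needs a reachable apiserver.
	out, err := exec.Command("kubectl",
		"--context", "functional-135520",
		"apply", "--validate=false",
		"-f", "testdata/invalidsvc.yaml",
	).CombinedOutput()
	fmt.Printf("%s\n(err=%v)\n", out, err)
}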

x
+
TestFunctional/parallel/DashboardCmd (1.77s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-135520 --alsologtostderr -v=1]
functional_test.go:933: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-135520 --alsologtostderr -v=1] ...
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-135520 --alsologtostderr -v=1] stdout:
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-135520 --alsologtostderr -v=1] stderr:
I1006 14:40:40.466716  678531 out.go:360] Setting OutFile to fd 1 ...
I1006 14:40:40.471968  678531 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1006 14:40:40.471992  678531 out.go:374] Setting ErrFile to fd 2...
I1006 14:40:40.471999  678531 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1006 14:40:40.472346  678531 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-626179/.minikube/bin
I1006 14:40:40.472845  678531 mustload.go:65] Loading cluster: functional-135520
I1006 14:40:40.473622  678531 config.go:182] Loaded profile config "functional-135520": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1006 14:40:40.474538  678531 cli_runner.go:164] Run: docker container inspect functional-135520 --format={{.State.Status}}
I1006 14:40:40.495649  678531 host.go:66] Checking if "functional-135520" exists ...
I1006 14:40:40.496094  678531 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1006 14:40:40.562971  678531 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-06 14:40:40.55145218 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I1006 14:40:40.563146  678531 api_server.go:166] Checking apiserver status ...
I1006 14:40:40.563233  678531 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1006 14:40:40.563282  678531 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-135520
I1006 14:40:40.583157  678531 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32878 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/functional-135520/id_rsa Username:docker}
W1006 14:40:40.695479  678531 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

stderr:
I1006 14:40:40.697547  678531 out.go:179] * The control-plane node functional-135520 apiserver is not running: (state=Stopped)
I1006 14:40:40.698819  678531 out.go:179]   To start a cluster, run: "minikube start -p functional-135520"
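Note: the api_server.go lines above show how the "Stopped" state is derived: minikube runs pgrep over SSH inside the node container and treats a non-zero exit as "no apiserver process". A minimal local sketch of that check, with sudo and the pattern taken from the log (the real code executes this over SSH, not locally):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// pgrep exits 0 when a process matches the pattern and 1 when none does,
	// which is how the (state=Stopped) message above is inferred.
	err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
	if err != nil {
		fmt.Println("apiserver: Stopped (no matching process)")
		return
	}
	fmt.Println("apiserver: Running")
}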
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-135520
helpers_test.go:243: (dbg) docker inspect functional-135520:

-- stdout --
	[
	    {
	        "Id": "3dd9a226ea42760de316c3cd10c240a06a297d856d2072b1bb8c9de31097dd20",
	        "Created": "2025-10-06T14:13:32.283355011Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 644403,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-06T14:13:32.318096257Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/3dd9a226ea42760de316c3cd10c240a06a297d856d2072b1bb8c9de31097dd20/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3dd9a226ea42760de316c3cd10c240a06a297d856d2072b1bb8c9de31097dd20/hostname",
	        "HostsPath": "/var/lib/docker/containers/3dd9a226ea42760de316c3cd10c240a06a297d856d2072b1bb8c9de31097dd20/hosts",
	        "LogPath": "/var/lib/docker/containers/3dd9a226ea42760de316c3cd10c240a06a297d856d2072b1bb8c9de31097dd20/3dd9a226ea42760de316c3cd10c240a06a297d856d2072b1bb8c9de31097dd20-json.log",
	        "Name": "/functional-135520",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-135520:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-135520",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "3dd9a226ea42760de316c3cd10c240a06a297d856d2072b1bb8c9de31097dd20",
	                "LowerDir": "/var/lib/docker/overlay2/fc963905026931708302dacddcd89a9d41c6b02cea585cc1ff491aa62dc8d60a-init/diff:/var/lib/docker/overlay2/498c39ad2e273bbda04a4b230222b9767ea2da097b1fe98436168d26143cd080/diff",
	                "MergedDir": "/var/lib/docker/overlay2/fc963905026931708302dacddcd89a9d41c6b02cea585cc1ff491aa62dc8d60a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/fc963905026931708302dacddcd89a9d41c6b02cea585cc1ff491aa62dc8d60a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/fc963905026931708302dacddcd89a9d41c6b02cea585cc1ff491aa62dc8d60a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-135520",
	                "Source": "/var/lib/docker/volumes/functional-135520/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-135520",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-135520",
	                "name.minikube.sigs.k8s.io": "functional-135520",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6368ffca3e5840f94a34614c511d9f0a0a4ca0d05de4fe1f94c8bfdc332f1a62",
	            "SandboxKey": "/var/run/docker/netns/6368ffca3e58",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32878"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32879"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32882"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32880"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32881"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-135520": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ae:d1:94:25:38:1c",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f712be59dd18dac98bed5f234c9f77a39e85277143d6f46285adcd3b0185d552",
	                    "EndpointID": "b816964b653b1b5116e3262dfdc87af272931013ef5b9e2714c9ff7357118a6f",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-135520",
	                        "3dd9a226ea42"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-135520 -n functional-135520
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-135520 -n functional-135520: exit status 2 (331.91771ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctional/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-135520 logs -n 25
helpers_test.go:260: TestFunctional/parallel/DashboardCmd logs: 
-- stdout --
	
	==> Audit <==
	┌───────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND  │                                                               ARGS                                                                │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├───────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ mount     │ -p functional-135520 /tmp/TestFunctionalparallelMountCmdspecific-port2551281271/001:/mount-9p --alsologtostderr -v=1 --port 46464 │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │                     │
	│ ssh       │ functional-135520 ssh findmnt -T /mount-9p | grep 9p                                                                              │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │                     │
	│ image     │ functional-135520 image ls                                                                                                        │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │ 06 Oct 25 14:40 UTC │
	│ image     │ functional-135520 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr         │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │ 06 Oct 25 14:40 UTC │
	│ image     │ functional-135520 image save --daemon kicbase/echo-server:functional-135520 --alsologtostderr                                     │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │ 06 Oct 25 14:40 UTC │
	│ ssh       │ functional-135520 ssh findmnt -T /mount-9p | grep 9p                                                                              │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │ 06 Oct 25 14:40 UTC │
	│ ssh       │ functional-135520 ssh -- ls -la /mount-9p                                                                                         │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │ 06 Oct 25 14:40 UTC │
	│ ssh       │ functional-135520 ssh sudo umount -f /mount-9p                                                                                    │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │                     │
	│ ssh       │ functional-135520 ssh findmnt -T /mount1                                                                                          │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │                     │
	│ mount     │ -p functional-135520 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1055249216/001:/mount3 --alsologtostderr -v=1                │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │                     │
	│ mount     │ -p functional-135520 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1055249216/001:/mount1 --alsologtostderr -v=1                │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │                     │
	│ mount     │ -p functional-135520 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1055249216/001:/mount2 --alsologtostderr -v=1                │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │                     │
	│ ssh       │ functional-135520 ssh findmnt -T /mount1                                                                                          │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │ 06 Oct 25 14:40 UTC │
	│ ssh       │ functional-135520 ssh findmnt -T /mount2                                                                                          │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │ 06 Oct 25 14:40 UTC │
	│ ssh       │ functional-135520 ssh findmnt -T /mount3                                                                                          │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │ 06 Oct 25 14:40 UTC │
	│ mount     │ -p functional-135520 --kill=true                                                                                                  │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │                     │
	│ service   │ functional-135520 service list                                                                                                    │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │                     │
	│ service   │ functional-135520 service list -o json                                                                                            │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │                     │
	│ service   │ functional-135520 service --namespace=default --https --url hello-node                                                            │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │                     │
	│ service   │ functional-135520 service hello-node --url --format={{.IP}}                                                                       │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │                     │
	│ service   │ functional-135520 service hello-node --url                                                                                        │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │                     │
	│ start     │ -p functional-135520 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio                         │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │                     │
	│ start     │ -p functional-135520 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio                         │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │                     │
	│ start     │ -p functional-135520 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                   │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │                     │
	│ dashboard │ --url --port 36195 -p functional-135520 --alsologtostderr -v=1                                                                    │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │                     │
	└───────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/06 14:40:40
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1006 14:40:40.232397  678375 out.go:360] Setting OutFile to fd 1 ...
	I1006 14:40:40.232695  678375 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 14:40:40.232706  678375 out.go:374] Setting ErrFile to fd 2...
	I1006 14:40:40.232710  678375 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 14:40:40.232913  678375 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-626179/.minikube/bin
	I1006 14:40:40.233416  678375 out.go:368] Setting JSON to false
	I1006 14:40:40.234527  678375 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":19376,"bootTime":1759742264,"procs":193,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1006 14:40:40.234623  678375 start.go:140] virtualization: kvm guest
	I1006 14:40:40.236341  678375 out.go:179] * [functional-135520] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1006 14:40:40.237443  678375 out.go:179]   - MINIKUBE_LOCATION=21701
	I1006 14:40:40.237480  678375 notify.go:220] Checking for updates...
	I1006 14:40:40.239720  678375 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1006 14:40:40.240829  678375 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21701-626179/kubeconfig
	I1006 14:40:40.241859  678375 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21701-626179/.minikube
	I1006 14:40:40.242876  678375 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1006 14:40:40.243805  678375 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1006 14:40:40.245219  678375 config.go:182] Loaded profile config "functional-135520": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 14:40:40.245691  678375 driver.go:421] Setting default libvirt URI to qemu:///system
	I1006 14:40:40.271708  678375 docker.go:123] docker version: linux-28.5.0:Docker Engine - Community
	I1006 14:40:40.271845  678375 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 14:40:40.332594  678375 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-06 14:40:40.321774938 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1006 14:40:40.332758  678375 docker.go:318] overlay module found
	I1006 14:40:40.333962  678375 out.go:179] * Using the docker driver based on existing profile
	I1006 14:40:40.335324  678375 start.go:304] selected driver: docker
	I1006 14:40:40.335338  678375 start.go:924] validating driver "docker" against &{Name:functional-135520 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-135520 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 14:40:40.335418  678375 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1006 14:40:40.335503  678375 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 14:40:40.404152  678375 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-06 14:40:40.39324905 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1006 14:40:40.405093  678375 cni.go:84] Creating CNI manager for ""
	I1006 14:40:40.405186  678375 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1006 14:40:40.405273  678375 start.go:348] cluster config:
	{Name:functional-135520 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-135520 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 14:40:40.407149  678375 out.go:179] * dry-run validation complete!
	
	
	==> CRI-O <==
	Oct 06 14:40:33 functional-135520 crio[5849]: time="2025-10-06T14:40:33.798335064Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-135520" id=c07c47b5-f123-4df6-aac0-718c9481559f name=/runtime.v1.ImageService/ImageStatus
	Oct 06 14:40:33 functional-135520 crio[5849]: time="2025-10-06T14:40:33.798458706Z" level=info msg="Image localhost/kicbase/echo-server:functional-135520 not found" id=c07c47b5-f123-4df6-aac0-718c9481559f name=/runtime.v1.ImageService/ImageStatus
	Oct 06 14:40:33 functional-135520 crio[5849]: time="2025-10-06T14:40:33.798490196Z" level=info msg="Neither image nor artfiact localhost/kicbase/echo-server:functional-135520 found" id=c07c47b5-f123-4df6-aac0-718c9481559f name=/runtime.v1.ImageService/ImageStatus
	Oct 06 14:40:33 functional-135520 crio[5849]: time="2025-10-06T14:40:33.980963669Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=17b6706e-b500-4524-871f-23df38e70571 name=/runtime.v1.ImageService/ImageStatus
	Oct 06 14:40:33 functional-135520 crio[5849]: time="2025-10-06T14:40:33.981925826Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=94f4b8be-c003-4976-9cb9-8a805158b29d name=/runtime.v1.ImageService/ImageStatus
	Oct 06 14:40:33 functional-135520 crio[5849]: time="2025-10-06T14:40:33.982820585Z" level=info msg="Creating container: kube-system/kube-scheduler-functional-135520/kube-scheduler" id=af53cacb-5aef-4f09-b7c7-e182743a4512 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:40:33 functional-135520 crio[5849]: time="2025-10-06T14:40:33.983106395Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 14:40:33 functional-135520 crio[5849]: time="2025-10-06T14:40:33.987700403Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 14:40:33 functional-135520 crio[5849]: time="2025-10-06T14:40:33.988175946Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 14:40:34 functional-135520 crio[5849]: time="2025-10-06T14:40:34.003670737Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=af53cacb-5aef-4f09-b7c7-e182743a4512 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:40:34 functional-135520 crio[5849]: time="2025-10-06T14:40:34.005132701Z" level=info msg="createCtr: deleting container ID aa3a2f6476915d7b5d9b1bd05a3095d22efa7de7f25df14d6830c1b4bad20c39 from idIndex" id=af53cacb-5aef-4f09-b7c7-e182743a4512 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:40:34 functional-135520 crio[5849]: time="2025-10-06T14:40:34.005171158Z" level=info msg="createCtr: removing container aa3a2f6476915d7b5d9b1bd05a3095d22efa7de7f25df14d6830c1b4bad20c39" id=af53cacb-5aef-4f09-b7c7-e182743a4512 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:40:34 functional-135520 crio[5849]: time="2025-10-06T14:40:34.005225713Z" level=info msg="createCtr: deleting container aa3a2f6476915d7b5d9b1bd05a3095d22efa7de7f25df14d6830c1b4bad20c39 from storage" id=af53cacb-5aef-4f09-b7c7-e182743a4512 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:40:34 functional-135520 crio[5849]: time="2025-10-06T14:40:34.007324024Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-functional-135520_kube-system_5115bd1eba9594a3f2b99b5d6a4b9d59_0" id=af53cacb-5aef-4f09-b7c7-e182743a4512 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:40:39 functional-135520 crio[5849]: time="2025-10-06T14:40:39.980750641Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=ee4ca7c7-ac83-4870-9ade-fa2df648ae3f name=/runtime.v1.ImageService/ImageStatus
	Oct 06 14:40:39 functional-135520 crio[5849]: time="2025-10-06T14:40:39.981808962Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=8b71f330-b482-48e5-bcb5-dc885b414478 name=/runtime.v1.ImageService/ImageStatus
	Oct 06 14:40:39 functional-135520 crio[5849]: time="2025-10-06T14:40:39.983078192Z" level=info msg="Creating container: kube-system/kube-controller-manager-functional-135520/kube-controller-manager" id=76f1f729-27f5-4452-8cb2-354f0a45cfd8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:40:39 functional-135520 crio[5849]: time="2025-10-06T14:40:39.983452542Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 14:40:39 functional-135520 crio[5849]: time="2025-10-06T14:40:39.989062942Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 14:40:39 functional-135520 crio[5849]: time="2025-10-06T14:40:39.991602485Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 14:40:40 functional-135520 crio[5849]: time="2025-10-06T14:40:40.010584568Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=76f1f729-27f5-4452-8cb2-354f0a45cfd8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:40:40 functional-135520 crio[5849]: time="2025-10-06T14:40:40.01256866Z" level=info msg="createCtr: deleting container ID 1598bcba6b2dd999bfc0d02c0f68684bd1b8f0cb195f1cd27ebf377fd1f66153 from idIndex" id=76f1f729-27f5-4452-8cb2-354f0a45cfd8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:40:40 functional-135520 crio[5849]: time="2025-10-06T14:40:40.012620131Z" level=info msg="createCtr: removing container 1598bcba6b2dd999bfc0d02c0f68684bd1b8f0cb195f1cd27ebf377fd1f66153" id=76f1f729-27f5-4452-8cb2-354f0a45cfd8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:40:40 functional-135520 crio[5849]: time="2025-10-06T14:40:40.012659775Z" level=info msg="createCtr: deleting container 1598bcba6b2dd999bfc0d02c0f68684bd1b8f0cb195f1cd27ebf377fd1f66153 from storage" id=76f1f729-27f5-4452-8cb2-354f0a45cfd8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:40:40 functional-135520 crio[5849]: time="2025-10-06T14:40:40.015113141Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-functional-135520_kube-system_09d686e340c6809af92c3f18dc65ef21_0" id=76f1f729-27f5-4452-8cb2-354f0a45cfd8 name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:40:41.799045   17952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:40:41.799720   17952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:40:41.801468   17952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:40:41.802126   17952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:40:41.803926   17952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	
	
	==> kernel <==
	 14:40:41 up  5:22,  0 user,  load average: 1.09, 0.28, 0.31
	Linux functional-135520 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 06 14:40:31 functional-135520 kubelet[14966]: E1006 14:40:31.602306   14966 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-135520?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="7s"
	Oct 06 14:40:31 functional-135520 kubelet[14966]: I1006 14:40:31.764420   14966 kubelet_node_status.go:75] "Attempting to register node" node="functional-135520"
	Oct 06 14:40:31 functional-135520 kubelet[14966]: E1006 14:40:31.764871   14966 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8441/api/v1/nodes\": dial tcp 192.168.49.2:8441: connect: connection refused" node="functional-135520"
	Oct 06 14:40:33 functional-135520 kubelet[14966]: E1006 14:40:33.980503   14966 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-135520\" not found" node="functional-135520"
	Oct 06 14:40:34 functional-135520 kubelet[14966]: E1006 14:40:34.007644   14966 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 06 14:40:34 functional-135520 kubelet[14966]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 06 14:40:34 functional-135520 kubelet[14966]:  > podSandboxID="526b997044ad8cc54e45aef5a5faa2edaadc9cabbedd2784eaded2bd6562135f"
	Oct 06 14:40:34 functional-135520 kubelet[14966]: E1006 14:40:34.007745   14966 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 06 14:40:34 functional-135520 kubelet[14966]:         container kube-scheduler start failed in pod kube-scheduler-functional-135520_kube-system(5115bd1eba9594a3f2b99b5d6a4b9d59): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 06 14:40:34 functional-135520 kubelet[14966]:  > logger="UnhandledError"
	Oct 06 14:40:34 functional-135520 kubelet[14966]: E1006 14:40:34.007777   14966 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-functional-135520" podUID="5115bd1eba9594a3f2b99b5d6a4b9d59"
	Oct 06 14:40:36 functional-135520 kubelet[14966]: E1006 14:40:36.021610   14966 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8441/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8441: connect: connection refused" event="&Event{ObjectMeta:{functional-135520.186beda7023a08f5  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-135520,UID:functional-135520,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node functional-135520 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:functional-135520,},FirstTimestamp:2025-10-06 14:36:20.970989813 +0000 UTC m=+1.419813170,LastTimestamp:2025-10-06 14:36:20.970989813 +0000 UTC m=+1.419813170,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-135520,}"
	Oct 06 14:40:36 functional-135520 kubelet[14966]: E1006 14:40:36.228685   14966 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://192.168.49.2:8441/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
	Oct 06 14:40:38 functional-135520 kubelet[14966]: E1006 14:40:38.603588   14966 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-135520?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="7s"
	Oct 06 14:40:38 functional-135520 kubelet[14966]: I1006 14:40:38.766620   14966 kubelet_node_status.go:75] "Attempting to register node" node="functional-135520"
	Oct 06 14:40:38 functional-135520 kubelet[14966]: E1006 14:40:38.766986   14966 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8441/api/v1/nodes\": dial tcp 192.168.49.2:8441: connect: connection refused" node="functional-135520"
	Oct 06 14:40:39 functional-135520 kubelet[14966]: E1006 14:40:39.980242   14966 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-135520\" not found" node="functional-135520"
	Oct 06 14:40:40 functional-135520 kubelet[14966]: E1006 14:40:40.015489   14966 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 06 14:40:40 functional-135520 kubelet[14966]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 06 14:40:40 functional-135520 kubelet[14966]:  > podSandboxID="e06459a5221479b8f8ca8a805df180001ae8c03ad8ebddffca24e6ba8a2614e8"
	Oct 06 14:40:40 functional-135520 kubelet[14966]: E1006 14:40:40.015615   14966 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 06 14:40:40 functional-135520 kubelet[14966]:         container kube-controller-manager start failed in pod kube-controller-manager-functional-135520_kube-system(09d686e340c6809af92c3f18dc65ef21): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 06 14:40:40 functional-135520 kubelet[14966]:  > logger="UnhandledError"
	Oct 06 14:40:40 functional-135520 kubelet[14966]: E1006 14:40:40.015653   14966 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-functional-135520" podUID="09d686e340c6809af92c3f18dc65ef21"
	Oct 06 14:40:40 functional-135520 kubelet[14966]: E1006 14:40:40.994321   14966 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-135520\" not found"
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-135520 -n functional-135520
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-135520 -n functional-135520: exit status 2 (312.757899ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-135520" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/parallel/DashboardCmd (1.77s)
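The recurring CRI-O error in the logs above, "Container creation error: cannot open sd-bus: No such file or directory", is the proximate failure behind this whole post-mortem: container creation aborts before the control-plane pods can start, so kube-scheduler and kube-controller-manager never come up and the apiserver on 8441 keeps refusing connections in every later check. The message typically means the OCI runtime's systemd cgroup support cannot open a connection to a systemd bus inside the node container. A minimal Go probe for the same condition is sketched below; it assumes the github.com/coreos/go-systemd/v22/dbus package and is a diagnostic aid, not part of the minikube test suite.

	// sdbusprobe: diagnostic sketch. Checks whether a systemd bus
	// connection can be opened, which is what a systemd cgroup manager
	// needs before it can place containers into scopes and slices.
	package main

	import (
		"context"
		"fmt"
		"os"

		"github.com/coreos/go-systemd/v22/dbus"
	)

	func main() {
		// NewWithContext tries the available systemd bus endpoints
		// (the system D-Bus socket, and /run/systemd/private as root).
		conn, err := dbus.NewWithContext(context.Background())
		if err != nil {
			// On the broken node this is expected to fail the same way
			// the createCtr calls in the CRI-O log do: no bus socket.
			fmt.Fprintf(os.Stderr, "systemd bus unreachable: %v\n", err)
			os.Exit(1)
		}
		defer conn.Close()
		fmt.Println("systemd bus reachable; systemd cgroup manager can work")
	}

Run inside the node (for example via docker exec into functional-135520), this would presumably fail exactly where the kubelet's CreateContainer calls fail; on a healthy node it should connect.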

TestFunctional/parallel/StatusCmd (2.99s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-135520 status
functional_test.go:869: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-135520 status: exit status 2 (306.510176ms)

-- stdout --
	functional-135520
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Configured
	

-- /stdout --
functional_test.go:871: failed to run minikube status. args "out/minikube-linux-amd64 -p functional-135520 status" : exit status 2
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-135520 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:875: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-135520 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: exit status 2 (311.067032ms)

-- stdout --
	host:Running,kublet:Running,apiserver:Stopped,kubeconfig:Configured

-- /stdout --
functional_test.go:877: failed to run minikube status with custom format: args "out/minikube-linux-amd64 -p functional-135520 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}": exit status 2
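The -f argument is an ordinary Go text/template evaluated against the status object, which is why the misspelling "kublet" from the format string comes back verbatim in the output while {{.Kubelet}} still resolves to a field: only the {{...}} actions are interpreted, the rest is literal text. A self-contained sketch of the equivalent rendering (struct name and values copied from the output above, not from minikube's source):

	// statusfmt: sketch of rendering a minikube-style status struct
	// through the same kind of text/template that `status -f` accepts.
	package main

	import (
		"log"
		"os"
		"text/template"
	)

	type Status struct {
		Host       string
		Kubelet    string
		APIServer  string
		Kubeconfig string
	}

	func main() {
		// "kublet" is literal template text; {{.Kubelet}} is the field.
		const format = "host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}\n"
		tmpl, err := template.New("status").Parse(format)
		if err != nil {
			log.Fatal(err)
		}
		st := Status{Host: "Running", Kubelet: "Running", APIServer: "Stopped", Kubeconfig: "Configured"}
		if err := tmpl.Execute(os.Stdout, st); err != nil {
			log.Fatal(err)
		}
	}

The rendered output matches the captured stdout above byte for byte.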
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-135520 status -o json
functional_test.go:887: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-135520 status -o json: exit status 2 (319.702024ms)

-- stdout --
	{"Name":"functional-135520","Host":"Running","Kubelet":"Running","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
functional_test.go:889: failed to run minikube status with json output. args "out/minikube-linux-amd64 -p functional-135520 status -o json" : exit status 2
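The JSON form is the easiest one to assert on programmatically; it decodes into a small struct with nothing but the standard library. A minimal sketch (field set taken from the output above; the NodeStatus type name is illustrative):

	// statusjson: sketch decoding the `minikube status -o json` output
	// captured above.
	package main

	import (
		"encoding/json"
		"fmt"
		"log"
	)

	type NodeStatus struct {
		Name       string
		Host       string
		Kubelet    string
		APIServer  string
		Kubeconfig string
		Worker     bool
	}

	func main() {
		raw := []byte(`{"Name":"functional-135520","Host":"Running","Kubelet":"Running","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}`)
		var st NodeStatus
		if err := json.Unmarshal(raw, &st); err != nil {
			log.Fatal(err)
		}
		// The failure mode of this run in one line: host and kubelet up,
		// control plane down.
		fmt.Printf("host=%s kubelet=%s apiserver=%s\n", st.Host, st.Kubelet, st.APIServer)
	}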
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/StatusCmd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/StatusCmd]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-135520
helpers_test.go:243: (dbg) docker inspect functional-135520:

-- stdout --
	[
	    {
	        "Id": "3dd9a226ea42760de316c3cd10c240a06a297d856d2072b1bb8c9de31097dd20",
	        "Created": "2025-10-06T14:13:32.283355011Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 644403,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-06T14:13:32.318096257Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/3dd9a226ea42760de316c3cd10c240a06a297d856d2072b1bb8c9de31097dd20/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3dd9a226ea42760de316c3cd10c240a06a297d856d2072b1bb8c9de31097dd20/hostname",
	        "HostsPath": "/var/lib/docker/containers/3dd9a226ea42760de316c3cd10c240a06a297d856d2072b1bb8c9de31097dd20/hosts",
	        "LogPath": "/var/lib/docker/containers/3dd9a226ea42760de316c3cd10c240a06a297d856d2072b1bb8c9de31097dd20/3dd9a226ea42760de316c3cd10c240a06a297d856d2072b1bb8c9de31097dd20-json.log",
	        "Name": "/functional-135520",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-135520:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-135520",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "3dd9a226ea42760de316c3cd10c240a06a297d856d2072b1bb8c9de31097dd20",
	                "LowerDir": "/var/lib/docker/overlay2/fc963905026931708302dacddcd89a9d41c6b02cea585cc1ff491aa62dc8d60a-init/diff:/var/lib/docker/overlay2/498c39ad2e273bbda04a4b230222b9767ea2da097b1fe98436168d26143cd080/diff",
	                "MergedDir": "/var/lib/docker/overlay2/fc963905026931708302dacddcd89a9d41c6b02cea585cc1ff491aa62dc8d60a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/fc963905026931708302dacddcd89a9d41c6b02cea585cc1ff491aa62dc8d60a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/fc963905026931708302dacddcd89a9d41c6b02cea585cc1ff491aa62dc8d60a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-135520",
	                "Source": "/var/lib/docker/volumes/functional-135520/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-135520",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-135520",
	                "name.minikube.sigs.k8s.io": "functional-135520",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6368ffca3e5840f94a34614c511d9f0a0a4ca0d05de4fe1f94c8bfdc332f1a62",
	            "SandboxKey": "/var/run/docker/netns/6368ffca3e58",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32878"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32879"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32882"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32880"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32881"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-135520": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ae:d1:94:25:38:1c",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f712be59dd18dac98bed5f234c9f77a39e85277143d6f46285adcd3b0185d552",
	                    "EndpointID": "b816964b653b1b5116e3262dfdc87af272931013ef5b9e2714c9ff7357118a6f",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-135520",
	                        "3dd9a226ea42"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
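The NetworkSettings.Ports map in the inspect output above is where the host port mapped to the apiserver's 8441/tcp lives (127.0.0.1:32881 in this run), and it is how a client on the host reaches the (currently stopped) apiserver. A standard-library sketch of extracting it follows; the inspectEntry type is illustrative and mirrors only the fields shown above:

	// inspectports: sketch pulling the published host port for 8441/tcp
	// out of `docker inspect <container>` JSON.
	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"os/exec"
	)

	type inspectEntry struct {
		NetworkSettings struct {
			Ports map[string][]struct {
				HostIp   string
				HostPort string
			}
		}
	}

	func main() {
		out, err := exec.Command("docker", "inspect", "functional-135520").Output()
		if err != nil {
			log.Fatal(err)
		}
		var entries []inspectEntry
		if err := json.Unmarshal(out, &entries); err != nil {
			log.Fatal(err)
		}
		if len(entries) == 0 {
			log.Fatal("no such container")
		}
		bindings := entries[0].NetworkSettings.Ports["8441/tcp"]
		if len(bindings) == 0 {
			log.Fatal("8441/tcp is not published")
		}
		// For this run: 127.0.0.1:32881 -> 192.168.49.2:8441.
		fmt.Printf("apiserver published at %s:%s\n", bindings[0].HostIp, bindings[0].HostPort)
	}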
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-135520 -n functional-135520
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-135520 -n functional-135520: exit status 2 (307.221428ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctional/parallel/StatusCmd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/StatusCmd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-135520 logs -n 25
helpers_test.go:260: TestFunctional/parallel/StatusCmd logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                              ARGS                                                                               │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ functional-135520 ssh cat /mount-9p/test-1759761631098316341                                                                                                    │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │ 06 Oct 25 14:40 UTC │
	│ ssh     │ functional-135520 ssh mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates                                                                                │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │                     │
	│ image   │ functional-135520 image ls                                                                                                                                      │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │ 06 Oct 25 14:40 UTC │
	│ image   │ functional-135520 image save kicbase/echo-server:functional-135520 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │ 06 Oct 25 14:40 UTC │
	│ ssh     │ functional-135520 ssh sudo umount -f /mount-9p                                                                                                                  │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │ 06 Oct 25 14:40 UTC │
	│ image   │ functional-135520 image rm kicbase/echo-server:functional-135520 --alsologtostderr                                                                              │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │ 06 Oct 25 14:40 UTC │
	│ mount   │ -p functional-135520 /tmp/TestFunctionalparallelMountCmdspecific-port2551281271/001:/mount-9p --alsologtostderr -v=1 --port 46464                               │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │                     │
	│ ssh     │ functional-135520 ssh findmnt -T /mount-9p | grep 9p                                                                                                            │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │                     │
	│ image   │ functional-135520 image ls                                                                                                                                      │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │ 06 Oct 25 14:40 UTC │
	│ image   │ functional-135520 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr                                       │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │ 06 Oct 25 14:40 UTC │
	│ image   │ functional-135520 image save --daemon kicbase/echo-server:functional-135520 --alsologtostderr                                                                   │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │ 06 Oct 25 14:40 UTC │
	│ ssh     │ functional-135520 ssh findmnt -T /mount-9p | grep 9p                                                                                                            │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │ 06 Oct 25 14:40 UTC │
	│ ssh     │ functional-135520 ssh -- ls -la /mount-9p                                                                                                                       │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │ 06 Oct 25 14:40 UTC │
	│ ssh     │ functional-135520 ssh sudo umount -f /mount-9p                                                                                                                  │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │                     │
	│ ssh     │ functional-135520 ssh findmnt -T /mount1                                                                                                                        │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │                     │
	│ mount   │ -p functional-135520 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1055249216/001:/mount3 --alsologtostderr -v=1                                              │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │                     │
	│ mount   │ -p functional-135520 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1055249216/001:/mount1 --alsologtostderr -v=1                                              │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │                     │
	│ mount   │ -p functional-135520 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1055249216/001:/mount2 --alsologtostderr -v=1                                              │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │                     │
	│ ssh     │ functional-135520 ssh findmnt -T /mount1                                                                                                                        │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │ 06 Oct 25 14:40 UTC │
	│ ssh     │ functional-135520 ssh findmnt -T /mount2                                                                                                                        │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │ 06 Oct 25 14:40 UTC │
	│ ssh     │ functional-135520 ssh findmnt -T /mount3                                                                                                                        │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │ 06 Oct 25 14:40 UTC │
	│ mount   │ -p functional-135520 --kill=true                                                                                                                                │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │                     │
	│ service │ functional-135520 service list                                                                                                                                  │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │                     │
	│ service │ functional-135520 service list -o json                                                                                                                          │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │                     │
	│ service │ functional-135520 service --namespace=default --https --url hello-node                                                                                          │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/06 14:28:06
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1006 14:28:06.515575  656123 out.go:360] Setting OutFile to fd 1 ...
	I1006 14:28:06.515775  656123 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 14:28:06.515777  656123 out.go:374] Setting ErrFile to fd 2...
	I1006 14:28:06.515780  656123 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 14:28:06.515998  656123 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-626179/.minikube/bin
	I1006 14:28:06.516461  656123 out.go:368] Setting JSON to false
	I1006 14:28:06.517416  656123 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":18622,"bootTime":1759742264,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1006 14:28:06.517495  656123 start.go:140] virtualization: kvm guest
	I1006 14:28:06.519514  656123 out.go:179] * [functional-135520] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1006 14:28:06.520800  656123 out.go:179]   - MINIKUBE_LOCATION=21701
	I1006 14:28:06.520851  656123 notify.go:220] Checking for updates...
	I1006 14:28:06.523025  656123 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1006 14:28:06.524163  656123 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21701-626179/kubeconfig
	I1006 14:28:06.525184  656123 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21701-626179/.minikube
	I1006 14:28:06.526184  656123 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1006 14:28:06.527199  656123 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1006 14:28:06.528788  656123 config.go:182] Loaded profile config "functional-135520": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 14:28:06.528884  656123 driver.go:421] Setting default libvirt URI to qemu:///system
	I1006 14:28:06.553892  656123 docker.go:123] docker version: linux-28.5.0:Docker Engine - Community
	I1006 14:28:06.554005  656123 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 14:28:06.610913  656123 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:false NGoroutines:58 SystemTime:2025-10-06 14:28:06.599957285 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1006 14:28:06.611014  656123 docker.go:318] overlay module found
	I1006 14:28:06.612730  656123 out.go:179] * Using the docker driver based on existing profile
	I1006 14:28:06.613792  656123 start.go:304] selected driver: docker
	I1006 14:28:06.613801  656123 start.go:924] validating driver "docker" against &{Name:functional-135520 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-135520 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 14:28:06.613876  656123 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1006 14:28:06.613960  656123 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 14:28:06.672658  656123 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:false NGoroutines:58 SystemTime:2025-10-06 14:28:06.663055015 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1006 14:28:06.673343  656123 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1006 14:28:06.673382  656123 cni.go:84] Creating CNI manager for ""
	I1006 14:28:06.673449  656123 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
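
The two cni.go lines above show minikube's CNI auto-selection: no CNI was requested, and the docker driver paired with the crio runtime leads to kindnet being recommended. A minimal Go sketch of that decision (hypothetical chooseCNI helper, not minikube's actual code):

package main

import "fmt"

// chooseCNI mirrors the decision logged above: when no CNI is requested
// and the docker driver is paired with a CRI runtime such as crio or
// containerd, kindnet is recommended. Hypothetical sketch only.
func chooseCNI(driver, runtime, requested string) string {
	if requested != "" {
		return requested // an explicit user choice always wins
	}
	if driver == "docker" && (runtime == "crio" || runtime == "containerd") {
		return "kindnet"
	}
	return "bridge"
}

func main() {
	fmt.Println(chooseCNI("docker", "crio", "")) // kindnet
}
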
	I1006 14:28:06.673491  656123 start.go:348] cluster config:
	{Name:functional-135520 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-135520 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 14:28:06.675542  656123 out.go:179] * Starting "functional-135520" primary control-plane node in "functional-135520" cluster
	I1006 14:28:06.676765  656123 cache.go:123] Beginning downloading kic base image for docker with crio
	I1006 14:28:06.678012  656123 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1006 14:28:06.679109  656123 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1006 14:28:06.679148  656123 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21701-626179/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1006 14:28:06.679171  656123 cache.go:58] Caching tarball of preloaded images
	I1006 14:28:06.679229  656123 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1006 14:28:06.679315  656123 preload.go:233] Found /home/jenkins/minikube-integration/21701-626179/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1006 14:28:06.679322  656123 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1006 14:28:06.679424  656123 profile.go:143] Saving config to /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/config.json ...
	I1006 14:28:06.701440  656123 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1006 14:28:06.701451  656123 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1006 14:28:06.701470  656123 cache.go:232] Successfully downloaded all kic artifacts
	I1006 14:28:06.701500  656123 start.go:360] acquireMachinesLock for functional-135520: {Name:mk634323c4619e77647ac9d9aaca492e399526ae Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1006 14:28:06.701582  656123 start.go:364] duration metric: took 55.883µs to acquireMachinesLock for "functional-135520"
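
The acquireMachinesLock entries above record a named lock with Delay:500ms and Timeout:10m0s, so concurrent minikube runs serialize access to the same machine and the hold time can be reported as a duration metric. A bare-bones sketch of that poll-with-timeout pattern using an O_EXCL lock file (assumed acquireLock name and lock path; minikube's real implementation differs):

package main

import (
	"fmt"
	"os"
	"time"
)

// acquireLock polls for an exclusive lock file, mirroring the
// Delay:500ms / Timeout:10m0s parameters logged above. Sketch only.
func acquireLock(path string, delay, timeout time.Duration) (func(), error) {
	deadline := time.Now().Add(timeout)
	for {
		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0644)
		if err == nil {
			f.Close()
			return func() { os.Remove(path) }, nil // release callback
		}
		if time.Now().After(deadline) {
			return nil, fmt.Errorf("timed out waiting for %s", path)
		}
		time.Sleep(delay)
	}
}

func main() {
	release, err := acquireLock("/tmp/machines-functional-135520.lock", 500*time.Millisecond, 10*time.Minute)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	defer release()
	fmt.Println("lock held")
}
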
	I1006 14:28:06.701608  656123 start.go:96] Skipping create...Using existing machine configuration
	I1006 14:28:06.701614  656123 fix.go:54] fixHost starting: 
	I1006 14:28:06.701815  656123 cli_runner.go:164] Run: docker container inspect functional-135520 --format={{.State.Status}}
	I1006 14:28:06.719582  656123 fix.go:112] recreateIfNeeded on functional-135520: state=Running err=<nil>
	W1006 14:28:06.719608  656123 fix.go:138] unexpected machine state, will restart: <nil>
	I1006 14:28:06.721479  656123 out.go:252] * Updating the running docker "functional-135520" container ...
	I1006 14:28:06.721509  656123 machine.go:93] provisionDockerMachine start ...
	I1006 14:28:06.721596  656123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-135520
	I1006 14:28:06.739776  656123 main.go:141] libmachine: Using SSH client type: native
	I1006 14:28:06.740016  656123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32878 <nil> <nil>}
	I1006 14:28:06.740022  656123 main.go:141] libmachine: About to run SSH command:
	hostname
	I1006 14:28:06.883328  656123 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-135520
	
	I1006 14:28:06.883355  656123 ubuntu.go:182] provisioning hostname "functional-135520"
	I1006 14:28:06.883416  656123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-135520
	I1006 14:28:06.901008  656123 main.go:141] libmachine: Using SSH client type: native
	I1006 14:28:06.901274  656123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32878 <nil> <nil>}
	I1006 14:28:06.901282  656123 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-135520 && echo "functional-135520" | sudo tee /etc/hostname
	I1006 14:28:07.054829  656123 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-135520
	
	I1006 14:28:07.054893  656123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-135520
	I1006 14:28:07.073103  656123 main.go:141] libmachine: Using SSH client type: native
	I1006 14:28:07.073400  656123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32878 <nil> <nil>}
	I1006 14:28:07.073412  656123 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-135520' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-135520/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-135520' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1006 14:28:07.218044  656123 main.go:141] libmachine: SSH cmd err, output: <nil>: 
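
The SSH script above is deliberately idempotent: it leaves /etc/hosts untouched when a line already ends in the hostname, rewrites an existing 127.0.1.1 entry when one exists, and appends a new entry otherwise. A rough Go equivalent of that logic (illustrative ensureHostsEntry helper, assumed name):

package main

import (
	"fmt"
	"os"
	"regexp"
	"strings"
)

// ensureHostsEntry reproduces the shell above: leave the file alone if the
// hostname is already mapped, rewrite an existing 127.0.1.1 line if there
// is one, otherwise append a new entry. Illustration only.
func ensureHostsEntry(path, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	if regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(hostname) + `$`).Match(data) {
		return nil // already mapped, nothing to do
	}
	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	entry := "127.0.1.1 " + hostname
	var out string
	if loopback.Match(data) {
		out = loopback.ReplaceAllString(string(data), entry)
	} else {
		out = strings.TrimRight(string(data), "\n") + "\n" + entry + "\n"
	}
	return os.WriteFile(path, []byte(out), 0644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "functional-135520"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
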
	I1006 14:28:07.218064  656123 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21701-626179/.minikube CaCertPath:/home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21701-626179/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21701-626179/.minikube}
	I1006 14:28:07.218086  656123 ubuntu.go:190] setting up certificates
	I1006 14:28:07.218097  656123 provision.go:84] configureAuth start
	I1006 14:28:07.218147  656123 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-135520
	I1006 14:28:07.235320  656123 provision.go:143] copyHostCerts
	I1006 14:28:07.235375  656123 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-626179/.minikube/ca.pem, removing ...
	I1006 14:28:07.235390  656123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-626179/.minikube/ca.pem
	I1006 14:28:07.235462  656123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21701-626179/.minikube/ca.pem (1082 bytes)
	I1006 14:28:07.235557  656123 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-626179/.minikube/cert.pem, removing ...
	I1006 14:28:07.235561  656123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-626179/.minikube/cert.pem
	I1006 14:28:07.235585  656123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21701-626179/.minikube/cert.pem (1123 bytes)
	I1006 14:28:07.235653  656123 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-626179/.minikube/key.pem, removing ...
	I1006 14:28:07.235656  656123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-626179/.minikube/key.pem
	I1006 14:28:07.235685  656123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21701-626179/.minikube/key.pem (1679 bytes)
	I1006 14:28:07.235742  656123 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21701-626179/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca-key.pem org=jenkins.functional-135520 san=[127.0.0.1 192.168.49.2 functional-135520 localhost minikube]
	I1006 14:28:07.452963  656123 provision.go:177] copyRemoteCerts
	I1006 14:28:07.453021  656123 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1006 14:28:07.453058  656123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-135520
	I1006 14:28:07.470979  656123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32878 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/functional-135520/id_rsa Username:docker}
	I1006 14:28:07.572166  656123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1006 14:28:07.589268  656123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1006 14:28:07.606864  656123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1006 14:28:07.624012  656123 provision.go:87] duration metric: took 405.903097ms to configureAuth
	I1006 14:28:07.624031  656123 ubuntu.go:206] setting minikube options for container-runtime
	I1006 14:28:07.624198  656123 config.go:182] Loaded profile config "functional-135520": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 14:28:07.624358  656123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-135520
	I1006 14:28:07.642129  656123 main.go:141] libmachine: Using SSH client type: native
	I1006 14:28:07.642348  656123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32878 <nil> <nil>}
	I1006 14:28:07.642358  656123 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1006 14:28:07.930562  656123 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1006 14:28:07.930579  656123 machine.go:96] duration metric: took 1.209063221s to provisionDockerMachine
	I1006 14:28:07.930589  656123 start.go:293] postStartSetup for "functional-135520" (driver="docker")
	I1006 14:28:07.930598  656123 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1006 14:28:07.930651  656123 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1006 14:28:07.930687  656123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-135520
	I1006 14:28:07.948006  656123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32878 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/functional-135520/id_rsa Username:docker}
	I1006 14:28:08.049510  656123 ssh_runner.go:195] Run: cat /etc/os-release
	I1006 14:28:08.053027  656123 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1006 14:28:08.053042  656123 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1006 14:28:08.053061  656123 filesync.go:126] Scanning /home/jenkins/minikube-integration/21701-626179/.minikube/addons for local assets ...
	I1006 14:28:08.053110  656123 filesync.go:126] Scanning /home/jenkins/minikube-integration/21701-626179/.minikube/files for local assets ...
	I1006 14:28:08.053177  656123 filesync.go:149] local asset: /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem -> 6297192.pem in /etc/ssl/certs
	I1006 14:28:08.053267  656123 filesync.go:149] local asset: /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/test/nested/copy/629719/hosts -> hosts in /etc/test/nested/copy/629719
	I1006 14:28:08.053298  656123 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/629719
	I1006 14:28:08.060796  656123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem --> /etc/ssl/certs/6297192.pem (1708 bytes)
	I1006 14:28:08.077999  656123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/test/nested/copy/629719/hosts --> /etc/test/nested/copy/629719/hosts (40 bytes)
	I1006 14:28:08.094766  656123 start.go:296] duration metric: took 164.165544ms for postStartSetup
	I1006 14:28:08.094821  656123 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1006 14:28:08.094852  656123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-135520
	I1006 14:28:08.112238  656123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32878 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/functional-135520/id_rsa Username:docker}
	I1006 14:28:08.210200  656123 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1006 14:28:08.214744  656123 fix.go:56] duration metric: took 1.513121746s for fixHost
	I1006 14:28:08.214763  656123 start.go:83] releasing machines lock for "functional-135520", held for 1.513172056s
	I1006 14:28:08.214831  656123 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-135520
	I1006 14:28:08.231996  656123 ssh_runner.go:195] Run: cat /version.json
	I1006 14:28:08.232006  656123 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1006 14:28:08.232033  656123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-135520
	I1006 14:28:08.232059  656123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-135520
	I1006 14:28:08.250015  656123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32878 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/functional-135520/id_rsa Username:docker}
	I1006 14:28:08.250313  656123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32878 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/functional-135520/id_rsa Username:docker}
	I1006 14:28:08.415268  656123 ssh_runner.go:195] Run: systemctl --version
	I1006 14:28:08.422068  656123 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1006 14:28:08.458421  656123 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1006 14:28:08.463104  656123 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1006 14:28:08.463164  656123 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1006 14:28:08.471006  656123 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1006 14:28:08.471018  656123 start.go:495] detecting cgroup driver to use...
	I1006 14:28:08.471045  656123 detect.go:190] detected "systemd" cgroup driver on host os
	I1006 14:28:08.471088  656123 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1006 14:28:08.485271  656123 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1006 14:28:08.496859  656123 docker.go:218] disabling cri-docker service (if available) ...
	I1006 14:28:08.496895  656123 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1006 14:28:08.510507  656123 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1006 14:28:08.522301  656123 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1006 14:28:08.600902  656123 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1006 14:28:08.681762  656123 docker.go:234] disabling docker service ...
	I1006 14:28:08.681827  656123 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1006 14:28:08.696663  656123 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1006 14:28:08.708614  656123 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1006 14:28:08.788151  656123 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1006 14:28:08.872163  656123 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1006 14:28:08.884753  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1006 14:28:08.898897  656123 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1006 14:28:08.898940  656123 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:28:08.907545  656123 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1006 14:28:08.907597  656123 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:28:08.916027  656123 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:28:08.924428  656123 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:28:08.932498  656123 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1006 14:28:08.939984  656123 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:28:08.948324  656123 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:28:08.956705  656123 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:28:08.964969  656123 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1006 14:28:08.971804  656123 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1006 14:28:08.978693  656123 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 14:28:09.061389  656123 ssh_runner.go:195] Run: sudo systemctl restart crio
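
The sed runs above reconfigure CRI-O in place: each setting (pause_image, cgroup_manager, conmon_cgroup, default_sysctls) is rewritten in /etc/crio/crio.conf.d/02-crio.conf with an idempotent line replacement, then systemd is reloaded and crio restarted so the changes take effect. The same replace-matching-line idea expressed in Go (hypothetical setConfLine helper):

package main

import (
	"fmt"
	"os"
	"regexp"
)

// setConfLine replaces every line matching pattern with repl, the Go
// analogue of the `sed -i 's|^.*key = .*$|key = "value"|'` calls above.
// Hypothetical helper; CRI-O still has to be restarted afterwards.
func setConfLine(path, pattern, repl string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile("(?m)" + pattern)
	return os.WriteFile(path, re.ReplaceAll(data, []byte(repl)), 0644)
}

func main() {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	if err := setConfLine(conf, `^.*pause_image = .*$`, `pause_image = "registry.k8s.io/pause:3.10.1"`); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
	if err := setConfLine(conf, `^.*cgroup_manager = .*$`, `cgroup_manager = "systemd"`); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
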
	I1006 14:28:09.170335  656123 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1006 14:28:09.170401  656123 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1006 14:28:09.174497  656123 start.go:563] Will wait 60s for crictl version
	I1006 14:28:09.174546  656123 ssh_runner.go:195] Run: which crictl
	I1006 14:28:09.177947  656123 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1006 14:28:09.201915  656123 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1006 14:28:09.201972  656123 ssh_runner.go:195] Run: crio --version
	I1006 14:28:09.230589  656123 ssh_runner.go:195] Run: crio --version
	I1006 14:28:09.260606  656123 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1006 14:28:09.261947  656123 cli_runner.go:164] Run: docker network inspect functional-135520 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1006 14:28:09.278672  656123 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1006 14:28:09.284367  656123 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1006 14:28:09.285382  656123 kubeadm.go:883] updating cluster {Name:functional-135520 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-135520 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1006 14:28:09.285546  656123 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1006 14:28:09.285603  656123 ssh_runner.go:195] Run: sudo crictl images --output json
	I1006 14:28:09.318027  656123 crio.go:514] all images are preloaded for cri-o runtime.
	I1006 14:28:09.318039  656123 crio.go:433] Images already preloaded, skipping extraction
	I1006 14:28:09.318088  656123 ssh_runner.go:195] Run: sudo crictl images --output json
	I1006 14:28:09.342904  656123 crio.go:514] all images are preloaded for cri-o runtime.
	I1006 14:28:09.342917  656123 cache_images.go:85] Images are preloaded, skipping loading
	I1006 14:28:09.342923  656123 kubeadm.go:934] updating node { 192.168.49.2 8441 v1.34.1 crio true true} ...
	I1006 14:28:09.343012  656123 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-135520 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:functional-135520 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1006 14:28:09.343066  656123 ssh_runner.go:195] Run: crio config
	I1006 14:28:09.388889  656123 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1006 14:28:09.388909  656123 cni.go:84] Creating CNI manager for ""
	I1006 14:28:09.388921  656123 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1006 14:28:09.388932  656123 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1006 14:28:09.388955  656123 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-135520 NodeName:functional-135520 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1006 14:28:09.389087  656123 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-135520"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
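
The kubeadm config printed above is rendered from the option set logged at kubeadm.go:189: advertise address, API server port 8441, pod subnet 10.244.0.0/16, service subnet 10.96.0.0/12, the systemd cgroup driver, and the crio socket. A compressed text/template sketch of that rendering step (hypothetical Options struct and fragment; minikube's real template is much larger):

package main

import (
	"os"
	"text/template"
)

// Options carries only the fields from the log that the fragment below
// needs; minikube's real configuration struct is much larger.
type Options struct {
	AdvertiseAddress string
	APIServerPort    int
	PodSubnet        string
	ServiceSubnet    string
	CgroupDriver     string
	CRISocket        string
}

const fragment = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: unix://{{.CRISocket}}
---
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
---
cgroupDriver: {{.CgroupDriver}}
`

func main() {
	opts := Options{
		AdvertiseAddress: "192.168.49.2",
		APIServerPort:    8441,
		PodSubnet:        "10.244.0.0/16",
		ServiceSubnet:    "10.96.0.0/12",
		CgroupDriver:     "systemd",
		CRISocket:        "/var/run/crio/crio.sock",
	}
	tmpl := template.Must(template.New("kubeadm").Parse(fragment))
	_ = tmpl.Execute(os.Stdout, opts)
}
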
	
	I1006 14:28:09.389140  656123 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1006 14:28:09.397400  656123 binaries.go:44] Found k8s binaries, skipping transfer
	I1006 14:28:09.397454  656123 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1006 14:28:09.404846  656123 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1006 14:28:09.416672  656123 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1006 14:28:09.428910  656123 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2063 bytes)
	I1006 14:28:09.440961  656123 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1006 14:28:09.444714  656123 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 14:28:09.533656  656123 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1006 14:28:09.546185  656123 certs.go:69] Setting up /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520 for IP: 192.168.49.2
	I1006 14:28:09.546197  656123 certs.go:195] generating shared ca certs ...
	I1006 14:28:09.546290  656123 certs.go:227] acquiring lock for ca certs: {Name:mka0cc25cb6a953e937aa825fc55167759271aaf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:28:09.546440  656123 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21701-626179/.minikube/ca.key
	I1006 14:28:09.546475  656123 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21701-626179/.minikube/proxy-client-ca.key
	I1006 14:28:09.546482  656123 certs.go:257] generating profile certs ...
	I1006 14:28:09.546559  656123 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/client.key
	I1006 14:28:09.546594  656123 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/apiserver.key.72a46e8e
	I1006 14:28:09.546623  656123 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/proxy-client.key
	I1006 14:28:09.546728  656123 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/629719.pem (1338 bytes)
	W1006 14:28:09.546750  656123 certs.go:480] ignoring /home/jenkins/minikube-integration/21701-626179/.minikube/certs/629719_empty.pem, impossibly tiny 0 bytes
	I1006 14:28:09.546756  656123 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca-key.pem (1675 bytes)
	I1006 14:28:09.546775  656123 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem (1082 bytes)
	I1006 14:28:09.546793  656123 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/cert.pem (1123 bytes)
	I1006 14:28:09.546809  656123 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/key.pem (1679 bytes)
	I1006 14:28:09.546841  656123 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem (1708 bytes)
	I1006 14:28:09.547453  656123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1006 14:28:09.564638  656123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1006 14:28:09.581181  656123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1006 14:28:09.597600  656123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1006 14:28:09.614361  656123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1006 14:28:09.630631  656123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1006 14:28:09.647147  656123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1006 14:28:09.663361  656123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1006 14:28:09.679821  656123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem --> /usr/share/ca-certificates/6297192.pem (1708 bytes)
	I1006 14:28:09.696763  656123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1006 14:28:09.713335  656123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/certs/629719.pem --> /usr/share/ca-certificates/629719.pem (1338 bytes)
	I1006 14:28:09.729791  656123 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1006 14:28:09.741445  656123 ssh_runner.go:195] Run: openssl version
	I1006 14:28:09.747314  656123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1006 14:28:09.755183  656123 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1006 14:28:09.758724  656123 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  6 13:56 /usr/share/ca-certificates/minikubeCA.pem
	I1006 14:28:09.758757  656123 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1006 14:28:09.792226  656123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1006 14:28:09.799947  656123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/629719.pem && ln -fs /usr/share/ca-certificates/629719.pem /etc/ssl/certs/629719.pem"
	I1006 14:28:09.808163  656123 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/629719.pem
	I1006 14:28:09.811711  656123 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  6 14:13 /usr/share/ca-certificates/629719.pem
	I1006 14:28:09.811747  656123 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/629719.pem
	I1006 14:28:09.845740  656123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/629719.pem /etc/ssl/certs/51391683.0"
	I1006 14:28:09.854138  656123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6297192.pem && ln -fs /usr/share/ca-certificates/6297192.pem /etc/ssl/certs/6297192.pem"
	I1006 14:28:09.862651  656123 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6297192.pem
	I1006 14:28:09.866319  656123 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  6 14:13 /usr/share/ca-certificates/6297192.pem
	I1006 14:28:09.866364  656123 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6297192.pem
	I1006 14:28:09.900583  656123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6297192.pem /etc/ssl/certs/3ec20f2e.0"
	I1006 14:28:09.908997  656123 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1006 14:28:09.912812  656123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1006 14:28:09.946819  656123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1006 14:28:09.981139  656123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1006 14:28:10.015748  656123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1006 14:28:10.049705  656123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1006 14:28:10.084715  656123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
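
The six openssl runs above use -checkend 86400 to confirm that none of the control-plane certificates expires within the next 24 hours before the restart proceeds. The equivalent check with Go's crypto/x509 (a sketch matching -checkend semantics):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires in
// less than d, matching `openssl x509 -checkend` semantics. Sketch only.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}
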
	I1006 14:28:10.119782  656123 kubeadm.go:400] StartCluster: {Name:functional-135520 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-135520 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 14:28:10.119890  656123 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1006 14:28:10.119973  656123 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1006 14:28:10.149719  656123 cri.go:89] found id: ""
	I1006 14:28:10.149774  656123 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1006 14:28:10.158129  656123 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1006 14:28:10.158143  656123 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1006 14:28:10.158217  656123 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1006 14:28:10.166324  656123 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1006 14:28:10.166847  656123 kubeconfig.go:125] found "functional-135520" server: "https://192.168.49.2:8441"
	I1006 14:28:10.168240  656123 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1006 14:28:10.175929  656123 kubeadm.go:644] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-10-06 14:13:37.047601698 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-10-06 14:28:09.438461717 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
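Editor's note: per the two log lines above, drift detection is just `diff -u old new` over SSH, with a non-zero exit treated as "reconfigure". A minimal local sketch of that check, assuming the standard diff convention (exit 0 = identical, 1 = differ, >1 = error); the helper name is hypothetical:

```go
package main

import (
	"fmt"
	"os/exec"
)

// detectDrift is an illustrative stand-in for the check logged at
// kubeadm.go:644: run `diff -u` and treat exit status 1 as config drift.
func detectDrift(oldPath, newPath string) (bool, string, error) {
	out, err := exec.Command("diff", "-u", oldPath, newPath).CombinedOutput()
	if err == nil {
		return false, "", nil // files identical: no drift
	}
	if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
		return true, string(out), nil // files differ: reconfigure cluster
	}
	return false, "", err // diff itself failed
}

func main() {
	drift, patch, err := detectDrift("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		fmt.Println("diff failed:", err)
		return
	}
	if drift {
		fmt.Println("config drift detected:\n" + patch)
	}
}
```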
	I1006 14:28:10.175939  656123 kubeadm.go:1160] stopping kube-system containers ...
	I1006 14:28:10.175953  656123 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1006 14:28:10.175996  656123 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1006 14:28:10.204289  656123 cri.go:89] found id: ""
	I1006 14:28:10.204358  656123 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1006 14:28:10.246949  656123 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1006 14:28:10.255460  656123 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5635 Oct  6 14:17 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5640 Oct  6 14:17 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5676 Oct  6 14:17 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Oct  6 14:17 /etc/kubernetes/scheduler.conf
	
	I1006 14:28:10.255526  656123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1006 14:28:10.263528  656123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1006 14:28:10.271432  656123 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1006 14:28:10.271482  656123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1006 14:28:10.278844  656123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1006 14:28:10.286462  656123 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1006 14:28:10.286516  656123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1006 14:28:10.293960  656123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1006 14:28:10.301358  656123 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1006 14:28:10.301414  656123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1006 14:28:10.308882  656123 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1006 14:28:10.316879  656123 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1006 14:28:10.360770  656123 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1006 14:28:12.195064  656123 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.834266287s)
	I1006 14:28:12.195115  656123 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1006 14:28:12.367120  656123 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1006 14:28:12.417483  656123 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1006 14:28:12.470408  656123 api_server.go:52] waiting for apiserver process to appear ...
	I1006 14:28:12.470467  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	[... 118 further runs of `sudo pgrep -xnf kube-apiserver.*minikube.*` at ~500ms intervals, 14:28:12.971 through 14:29:11.471, elided; none found a matching apiserver process ...]
	I1006 14:29:11.970687  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
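Editor's note: the ~500ms cadence of the probes above suggests a fixed-interval wait loop with a deadline. A minimal sketch of that pattern, assuming a simple sleep loop (the helper name and one-minute timeout are assumptions, not minikube's code):

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServer re-runs the same pgrep probe seen in the log above
// roughly every 500ms until it succeeds or the deadline passes.
func waitForAPIServer(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// pgrep exits 0 only when a matching process exists.
		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver process did not appear within %s", timeout)
}

func main() {
	if err := waitForAPIServer(time.Minute); err != nil {
		fmt.Println(err)
	}
}
```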
	I1006 14:29:12.471591  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:29:12.471676  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:29:12.498988  656123 cri.go:89] found id: ""
	I1006 14:29:12.499014  656123 logs.go:282] 0 containers: []
	W1006 14:29:12.499021  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:29:12.499026  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:29:12.499080  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:29:12.526057  656123 cri.go:89] found id: ""
	I1006 14:29:12.526074  656123 logs.go:282] 0 containers: []
	W1006 14:29:12.526080  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:29:12.526085  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:29:12.526164  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:29:12.553395  656123 cri.go:89] found id: ""
	I1006 14:29:12.553415  656123 logs.go:282] 0 containers: []
	W1006 14:29:12.553426  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:29:12.553433  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:29:12.553486  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:29:12.580815  656123 cri.go:89] found id: ""
	I1006 14:29:12.580836  656123 logs.go:282] 0 containers: []
	W1006 14:29:12.580846  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:29:12.580870  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:29:12.580931  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:29:12.607516  656123 cri.go:89] found id: ""
	I1006 14:29:12.607533  656123 logs.go:282] 0 containers: []
	W1006 14:29:12.607539  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:29:12.607544  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:29:12.607607  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:29:12.634248  656123 cri.go:89] found id: ""
	I1006 14:29:12.634265  656123 logs.go:282] 0 containers: []
	W1006 14:29:12.634272  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:29:12.634279  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:29:12.634335  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:29:12.660860  656123 cri.go:89] found id: ""
	I1006 14:29:12.660876  656123 logs.go:282] 0 containers: []
	W1006 14:29:12.660883  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:29:12.660893  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:29:12.660905  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:29:12.731400  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:29:12.731425  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:29:12.745150  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:29:12.745174  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:29:12.803068  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:29:12.795122    6708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:12.795709    6708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:12.797425    6708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:12.797887    6708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:12.799415    6708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:29:12.795122    6708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:12.795709    6708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:12.797425    6708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:12.797887    6708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:12.799415    6708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 14:29:12.803085  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:29:12.803098  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:29:12.870066  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:29:12.870091  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
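Editor's note: the "container status" probe above is a shell fallback chain: prefer crictl (`which crictl || echo crictl`), and only fall back to `docker ps -a` if the crictl invocation fails. A minimal native sketch of the same preference order (function name hypothetical):

```go
package main

import (
	"fmt"
	"os/exec"
)

// containerStatus mirrors the fallback in the probe above: try crictl
// first, then docker. Command names come from the log; the rest is sketch.
func containerStatus() (string, error) {
	if out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput(); err == nil {
		return string(out), nil
	}
	out, err := exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
	return string(out), err
}

func main() {
	out, err := containerStatus()
	if err != nil {
		fmt.Println("both crictl and docker failed:", err)
		return
	}
	fmt.Print(out)
}
```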
	I1006 14:29:15.401709  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:29:15.412675  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:29:15.412725  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:29:15.438239  656123 cri.go:89] found id: ""
	I1006 14:29:15.438255  656123 logs.go:282] 0 containers: []
	W1006 14:29:15.438264  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:29:15.438270  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:29:15.438322  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:29:15.463684  656123 cri.go:89] found id: ""
	I1006 14:29:15.463701  656123 logs.go:282] 0 containers: []
	W1006 14:29:15.463709  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:29:15.463715  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:29:15.463769  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:29:15.488259  656123 cri.go:89] found id: ""
	I1006 14:29:15.488276  656123 logs.go:282] 0 containers: []
	W1006 14:29:15.488284  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:29:15.488289  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:29:15.488347  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:29:15.514676  656123 cri.go:89] found id: ""
	I1006 14:29:15.514692  656123 logs.go:282] 0 containers: []
	W1006 14:29:15.514699  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:29:15.514704  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:29:15.514762  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:29:15.540755  656123 cri.go:89] found id: ""
	I1006 14:29:15.540770  656123 logs.go:282] 0 containers: []
	W1006 14:29:15.540776  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:29:15.540781  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:29:15.540832  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:29:15.565570  656123 cri.go:89] found id: ""
	I1006 14:29:15.565588  656123 logs.go:282] 0 containers: []
	W1006 14:29:15.565598  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:29:15.565604  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:29:15.565651  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:29:15.591845  656123 cri.go:89] found id: ""
	I1006 14:29:15.591860  656123 logs.go:282] 0 containers: []
	W1006 14:29:15.591876  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:29:15.591885  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:29:15.591895  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:29:15.605051  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:29:15.605069  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:29:15.662500  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:29:15.655240    6822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:15.655743    6822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:15.657283    6822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:15.657783    6822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:15.659338    6822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:29:15.655240    6822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:15.655743    6822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:15.657283    6822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:15.657783    6822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:15.659338    6822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 14:29:15.662517  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:29:15.662531  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:29:15.727404  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:29:15.727424  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:29:15.756261  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:29:15.756279  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:29:18.330899  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:29:18.342312  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:29:18.342369  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:29:18.367886  656123 cri.go:89] found id: ""
	I1006 14:29:18.367902  656123 logs.go:282] 0 containers: []
	W1006 14:29:18.367912  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:29:18.367919  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:29:18.367967  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:29:18.394659  656123 cri.go:89] found id: ""
	I1006 14:29:18.394676  656123 logs.go:282] 0 containers: []
	W1006 14:29:18.394685  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:29:18.394691  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:29:18.394752  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:29:18.420739  656123 cri.go:89] found id: ""
	I1006 14:29:18.420762  656123 logs.go:282] 0 containers: []
	W1006 14:29:18.420773  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:29:18.420780  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:29:18.420844  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:29:18.446534  656123 cri.go:89] found id: ""
	I1006 14:29:18.446553  656123 logs.go:282] 0 containers: []
	W1006 14:29:18.446560  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:29:18.446565  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:29:18.446610  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:29:18.474847  656123 cri.go:89] found id: ""
	I1006 14:29:18.474867  656123 logs.go:282] 0 containers: []
	W1006 14:29:18.474876  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:29:18.474882  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:29:18.474940  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:29:18.500739  656123 cri.go:89] found id: ""
	I1006 14:29:18.500755  656123 logs.go:282] 0 containers: []
	W1006 14:29:18.500762  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:29:18.500767  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:29:18.500817  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:29:18.526704  656123 cri.go:89] found id: ""
	I1006 14:29:18.526720  656123 logs.go:282] 0 containers: []
	W1006 14:29:18.526726  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:29:18.526735  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:29:18.526749  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:29:18.594578  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:29:18.594601  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:29:18.608090  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:29:18.608110  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:29:18.665980  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:29:18.658366    6961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:18.658897    6961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:18.660516    6961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:18.660915    6961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:18.662586    6961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:29:18.658366    6961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:18.658897    6961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:18.660516    6961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:18.660915    6961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:18.662586    6961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 14:29:18.665999  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:29:18.666015  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:29:18.726769  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:29:18.726792  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:29:21.257561  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:29:21.269556  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:29:21.269611  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:29:21.295967  656123 cri.go:89] found id: ""
	I1006 14:29:21.295989  656123 logs.go:282] 0 containers: []
	W1006 14:29:21.296000  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:29:21.296007  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:29:21.296062  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:29:21.323201  656123 cri.go:89] found id: ""
	I1006 14:29:21.323232  656123 logs.go:282] 0 containers: []
	W1006 14:29:21.323240  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:29:21.323246  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:29:21.323297  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:29:21.352254  656123 cri.go:89] found id: ""
	I1006 14:29:21.352271  656123 logs.go:282] 0 containers: []
	W1006 14:29:21.352277  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:29:21.352282  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:29:21.352343  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:29:21.380457  656123 cri.go:89] found id: ""
	I1006 14:29:21.380477  656123 logs.go:282] 0 containers: []
	W1006 14:29:21.380486  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:29:21.380493  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:29:21.380559  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:29:21.408352  656123 cri.go:89] found id: ""
	I1006 14:29:21.408368  656123 logs.go:282] 0 containers: []
	W1006 14:29:21.408375  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:29:21.408379  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:29:21.408435  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:29:21.434925  656123 cri.go:89] found id: ""
	I1006 14:29:21.434941  656123 logs.go:282] 0 containers: []
	W1006 14:29:21.434948  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:29:21.434953  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:29:21.435001  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:29:21.462533  656123 cri.go:89] found id: ""
	I1006 14:29:21.462551  656123 logs.go:282] 0 containers: []
	W1006 14:29:21.462560  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:29:21.462570  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:29:21.462587  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:29:21.532658  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:29:21.532682  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:29:21.547259  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:29:21.547286  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:29:21.605779  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:29:21.598199    7083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:21.598802    7083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:21.600396    7083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:21.600847    7083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:21.602071    7083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:29:21.598199    7083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:21.598802    7083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:21.600396    7083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:21.600847    7083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:21.602071    7083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 14:29:21.605799  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:29:21.605816  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:29:21.670469  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:29:21.670493  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:29:24.203350  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:29:24.214528  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:29:24.214576  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:29:24.241149  656123 cri.go:89] found id: ""
	I1006 14:29:24.241173  656123 logs.go:282] 0 containers: []
	W1006 14:29:24.241182  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:29:24.241187  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:29:24.241259  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:29:24.267072  656123 cri.go:89] found id: ""
	I1006 14:29:24.267089  656123 logs.go:282] 0 containers: []
	W1006 14:29:24.267099  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:29:24.267104  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:29:24.267157  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:29:24.292610  656123 cri.go:89] found id: ""
	I1006 14:29:24.292629  656123 logs.go:282] 0 containers: []
	W1006 14:29:24.292639  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:29:24.292645  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:29:24.292694  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:29:24.318386  656123 cri.go:89] found id: ""
	I1006 14:29:24.318403  656123 logs.go:282] 0 containers: []
	W1006 14:29:24.318409  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:29:24.318414  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:29:24.318471  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:29:24.344804  656123 cri.go:89] found id: ""
	I1006 14:29:24.344827  656123 logs.go:282] 0 containers: []
	W1006 14:29:24.344837  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:29:24.344843  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:29:24.344893  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:29:24.372496  656123 cri.go:89] found id: ""
	I1006 14:29:24.372512  656123 logs.go:282] 0 containers: []
	W1006 14:29:24.372518  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:29:24.372523  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:29:24.372569  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:29:24.397473  656123 cri.go:89] found id: ""
	I1006 14:29:24.397489  656123 logs.go:282] 0 containers: []
	W1006 14:29:24.397495  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:29:24.397503  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:29:24.397514  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:29:24.460002  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:29:24.460024  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:29:24.492377  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:29:24.492394  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:29:24.558943  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:29:24.558960  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:29:24.572667  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:29:24.572685  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:29:24.631693  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:29:24.623841    7216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:24.624453    7216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:24.626057    7216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:24.626493    7216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:24.628013    7216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:29:24.623841    7216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:24.624453    7216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:24.626057    7216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:24.626493    7216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:24.628013    7216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 14:29:27.132387  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:29:27.143350  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:29:27.143429  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:29:27.169854  656123 cri.go:89] found id: ""
	I1006 14:29:27.169869  656123 logs.go:282] 0 containers: []
	W1006 14:29:27.169877  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:29:27.169882  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:29:27.169930  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:29:27.196448  656123 cri.go:89] found id: ""
	I1006 14:29:27.196464  656123 logs.go:282] 0 containers: []
	W1006 14:29:27.196471  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:29:27.196476  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:29:27.196522  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:29:27.223046  656123 cri.go:89] found id: ""
	I1006 14:29:27.223066  656123 logs.go:282] 0 containers: []
	W1006 14:29:27.223075  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:29:27.223081  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:29:27.223147  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:29:27.249726  656123 cri.go:89] found id: ""
	I1006 14:29:27.249744  656123 logs.go:282] 0 containers: []
	W1006 14:29:27.249751  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:29:27.249756  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:29:27.249810  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:29:27.277358  656123 cri.go:89] found id: ""
	I1006 14:29:27.277376  656123 logs.go:282] 0 containers: []
	W1006 14:29:27.277391  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:29:27.277398  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:29:27.277468  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:29:27.303432  656123 cri.go:89] found id: ""
	I1006 14:29:27.303452  656123 logs.go:282] 0 containers: []
	W1006 14:29:27.303461  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:29:27.303467  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:29:27.303524  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:29:27.330642  656123 cri.go:89] found id: ""
	I1006 14:29:27.330660  656123 logs.go:282] 0 containers: []
	W1006 14:29:27.330666  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:29:27.330677  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:29:27.330692  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:29:27.360553  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:29:27.360570  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:29:27.428526  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:29:27.428550  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:29:27.442696  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:29:27.442720  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:29:27.500958  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:29:27.493064    7333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:27.493671    7333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:27.495253    7333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:27.495769    7333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:27.497273    7333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1006 14:29:27.500983  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:29:27.500995  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
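Every "describe nodes" attempt in this stretch fails the same way: the bundled kubectl cannot reach the API server on localhost:8441. A quick manual probe of that port from inside the node confirms the symptom; the commands below are an illustrative sketch (the port comes from the log, but the curl/ss invocations are assumptions, not calls minikube itself makes here):

	# run inside the node, e.g. after `minikube ssh`
	curl -sk --max-time 5 https://localhost:8441/healthz || echo "refused, matching the log"
	sudo ss -ltnp | grep -w 8441 || echo "nothing listening on 8441"
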
	I1006 14:29:30.062974  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:29:30.074243  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:29:30.074297  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:29:30.101939  656123 cri.go:89] found id: ""
	I1006 14:29:30.101960  656123 logs.go:282] 0 containers: []
	W1006 14:29:30.101967  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:29:30.101973  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:29:30.102021  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:29:30.130122  656123 cri.go:89] found id: ""
	I1006 14:29:30.130139  656123 logs.go:282] 0 containers: []
	W1006 14:29:30.130145  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:29:30.130151  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:29:30.130229  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:29:30.157742  656123 cri.go:89] found id: ""
	I1006 14:29:30.157759  656123 logs.go:282] 0 containers: []
	W1006 14:29:30.157767  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:29:30.157773  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:29:30.157830  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:29:30.184613  656123 cri.go:89] found id: ""
	I1006 14:29:30.184634  656123 logs.go:282] 0 containers: []
	W1006 14:29:30.184641  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:29:30.184646  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:29:30.184696  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:29:30.212547  656123 cri.go:89] found id: ""
	I1006 14:29:30.212563  656123 logs.go:282] 0 containers: []
	W1006 14:29:30.212577  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:29:30.212582  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:29:30.212631  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:29:30.240288  656123 cri.go:89] found id: ""
	I1006 14:29:30.240303  656123 logs.go:282] 0 containers: []
	W1006 14:29:30.240310  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:29:30.240315  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:29:30.240365  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:29:30.267014  656123 cri.go:89] found id: ""
	I1006 14:29:30.267030  656123 logs.go:282] 0 containers: []
	W1006 14:29:30.267038  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:29:30.267047  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:29:30.267062  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:29:30.280742  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:29:30.280768  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:29:30.340211  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:29:30.332660    7440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:30.333170    7440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:30.334689    7440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:30.335152    7440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:30.336640    7440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1006 14:29:30.340244  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:29:30.340259  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:29:30.401294  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:29:30.401334  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:29:30.433250  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:29:30.433271  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
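Each retry walks the same list of control-plane components, one crictl query apiece. A hand-rolled equivalent of that sweep, with the component names and flags copied from the log (the loop itself is just a convenience):

	for name in kube-apiserver etcd coredns kube-scheduler \
	            kube-proxy kube-controller-manager kindnet; do
	  ids=$(sudo crictl ps -a --quiet --name="$name")
	  [ -z "$ids" ] && echo "no container matching \"$name\""
	done
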
	I1006 14:29:33.006726  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:29:33.018059  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:29:33.018122  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:29:33.045352  656123 cri.go:89] found id: ""
	I1006 14:29:33.045372  656123 logs.go:282] 0 containers: []
	W1006 14:29:33.045380  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:29:33.045386  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:29:33.045436  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:29:33.072234  656123 cri.go:89] found id: ""
	I1006 14:29:33.072252  656123 logs.go:282] 0 containers: []
	W1006 14:29:33.072260  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:29:33.072265  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:29:33.072315  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:29:33.100162  656123 cri.go:89] found id: ""
	I1006 14:29:33.100178  656123 logs.go:282] 0 containers: []
	W1006 14:29:33.100185  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:29:33.100190  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:29:33.100258  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:29:33.128258  656123 cri.go:89] found id: ""
	I1006 14:29:33.128278  656123 logs.go:282] 0 containers: []
	W1006 14:29:33.128288  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:29:33.128293  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:29:33.128342  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:29:33.155116  656123 cri.go:89] found id: ""
	I1006 14:29:33.155146  656123 logs.go:282] 0 containers: []
	W1006 14:29:33.155153  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:29:33.155158  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:29:33.155226  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:29:33.183135  656123 cri.go:89] found id: ""
	I1006 14:29:33.183150  656123 logs.go:282] 0 containers: []
	W1006 14:29:33.183156  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:29:33.183161  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:29:33.183243  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:29:33.209826  656123 cri.go:89] found id: ""
	I1006 14:29:33.209844  656123 logs.go:282] 0 containers: []
	W1006 14:29:33.209851  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:29:33.209859  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:29:33.209870  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:29:33.276119  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:29:33.276145  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:29:33.289780  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:29:33.289805  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:29:33.346572  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:29:33.338882    7581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:33.339397    7581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:33.341034    7581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:33.341541    7581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:33.343088    7581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1006 14:29:33.346592  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:29:33.346605  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:29:33.413643  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:29:33.413673  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
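The timestamps show the whole cycle repeating roughly every three seconds: probe for a kube-apiserver process, and when none is found, re-collect logs. A minimal poll of the same shape, assuming an illustrative 3-second interval and 5-minute deadline (the log does not reveal minikube's actual retry parameters):

	deadline=$((SECONDS + 300))
	until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	  [ "$SECONDS" -ge "$deadline" ] && { echo "timed out waiting for kube-apiserver" >&2; exit 1; }
	  sleep 3
	done
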
	I1006 14:29:35.944641  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:29:35.955753  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:29:35.955806  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:29:35.981909  656123 cri.go:89] found id: ""
	I1006 14:29:35.981923  656123 logs.go:282] 0 containers: []
	W1006 14:29:35.981930  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:29:35.981935  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:29:35.981981  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:29:36.006585  656123 cri.go:89] found id: ""
	I1006 14:29:36.006605  656123 logs.go:282] 0 containers: []
	W1006 14:29:36.006615  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:29:36.006621  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:29:36.006687  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:29:36.034185  656123 cri.go:89] found id: ""
	I1006 14:29:36.034211  656123 logs.go:282] 0 containers: []
	W1006 14:29:36.034221  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:29:36.034228  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:29:36.034279  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:29:36.060600  656123 cri.go:89] found id: ""
	I1006 14:29:36.060618  656123 logs.go:282] 0 containers: []
	W1006 14:29:36.060625  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:29:36.060630  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:29:36.060676  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:29:36.086928  656123 cri.go:89] found id: ""
	I1006 14:29:36.086945  656123 logs.go:282] 0 containers: []
	W1006 14:29:36.086953  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:29:36.086957  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:29:36.087073  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:29:36.112833  656123 cri.go:89] found id: ""
	I1006 14:29:36.112851  656123 logs.go:282] 0 containers: []
	W1006 14:29:36.112875  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:29:36.112882  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:29:36.112944  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:29:36.139970  656123 cri.go:89] found id: ""
	I1006 14:29:36.139991  656123 logs.go:282] 0 containers: []
	W1006 14:29:36.140002  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:29:36.140014  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:29:36.140030  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:29:36.153360  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:29:36.153383  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:29:36.209902  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:29:36.202455    7695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:36.202929    7695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:36.204558    7695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:36.205025    7695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:36.206599    7695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1006 14:29:36.209916  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:29:36.209929  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:29:36.276242  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:29:36.276264  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:29:36.305135  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:29:36.305152  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
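Four log sources recur in every pass: kubelet and CRI-O from journald, the kernel ring buffer, and the container list. To capture the same set by hand (commands copied from the log; the output file names are made up):

	sudo journalctl -u kubelet -n 400 > kubelet.log
	sudo journalctl -u crio -n 400 > crio.log
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400 > dmesg.log
	sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a
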
	I1006 14:29:38.872573  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:29:38.884454  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:29:38.884512  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:29:38.911055  656123 cri.go:89] found id: ""
	I1006 14:29:38.911071  656123 logs.go:282] 0 containers: []
	W1006 14:29:38.911076  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:29:38.911081  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:29:38.911142  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:29:38.937413  656123 cri.go:89] found id: ""
	I1006 14:29:38.937433  656123 logs.go:282] 0 containers: []
	W1006 14:29:38.937441  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:29:38.937450  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:29:38.937529  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:29:38.963534  656123 cri.go:89] found id: ""
	I1006 14:29:38.963557  656123 logs.go:282] 0 containers: []
	W1006 14:29:38.963564  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:29:38.963569  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:29:38.963619  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:29:38.989811  656123 cri.go:89] found id: ""
	I1006 14:29:38.989825  656123 logs.go:282] 0 containers: []
	W1006 14:29:38.989831  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:29:38.989836  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:29:38.989882  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:29:39.016789  656123 cri.go:89] found id: ""
	I1006 14:29:39.016809  656123 logs.go:282] 0 containers: []
	W1006 14:29:39.016818  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:29:39.016824  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:29:39.016876  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:29:39.042392  656123 cri.go:89] found id: ""
	I1006 14:29:39.042407  656123 logs.go:282] 0 containers: []
	W1006 14:29:39.042413  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:29:39.042426  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:29:39.042473  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:29:39.068836  656123 cri.go:89] found id: ""
	I1006 14:29:39.068852  656123 logs.go:282] 0 containers: []
	W1006 14:29:39.068859  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:29:39.068867  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:29:39.068877  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:29:39.137663  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:29:39.137689  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:29:39.151471  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:29:39.151495  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:29:39.209176  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:29:39.201542    7818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:39.202107    7818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:39.203710    7818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:39.204183    7818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:39.205768    7818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1006 14:29:39.209192  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:29:39.209218  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:29:39.274008  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:29:39.274031  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:29:41.804322  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:29:41.815323  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:29:41.815387  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:29:41.842055  656123 cri.go:89] found id: ""
	I1006 14:29:41.842070  656123 logs.go:282] 0 containers: []
	W1006 14:29:41.842077  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:29:41.842082  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:29:41.842129  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:29:41.868733  656123 cri.go:89] found id: ""
	I1006 14:29:41.868750  656123 logs.go:282] 0 containers: []
	W1006 14:29:41.868756  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:29:41.868762  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:29:41.868809  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:29:41.896710  656123 cri.go:89] found id: ""
	I1006 14:29:41.896732  656123 logs.go:282] 0 containers: []
	W1006 14:29:41.896742  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:29:41.896750  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:29:41.896807  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:29:41.924854  656123 cri.go:89] found id: ""
	I1006 14:29:41.924875  656123 logs.go:282] 0 containers: []
	W1006 14:29:41.924884  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:29:41.924891  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:29:41.924950  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:29:41.952359  656123 cri.go:89] found id: ""
	I1006 14:29:41.952376  656123 logs.go:282] 0 containers: []
	W1006 14:29:41.952382  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:29:41.952387  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:29:41.952453  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:29:41.979613  656123 cri.go:89] found id: ""
	I1006 14:29:41.979629  656123 logs.go:282] 0 containers: []
	W1006 14:29:41.979636  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:29:41.979640  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:29:41.979690  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:29:42.006904  656123 cri.go:89] found id: ""
	I1006 14:29:42.006923  656123 logs.go:282] 0 containers: []
	W1006 14:29:42.006931  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:29:42.006941  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:29:42.006953  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:29:42.020495  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:29:42.020518  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:29:42.078512  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:29:42.070746    7942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:42.071276    7942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:42.072881    7942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:42.073322    7942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:42.074846    7942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1006 14:29:42.078528  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:29:42.078543  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:29:42.143410  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:29:42.143435  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:29:42.173024  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:29:42.173042  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
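Note that every "describe nodes" attempt uses the kubectl binary and kubeconfig staged under /var/lib/minikube, not the host's kubectl. Re-running the exact logged command is the most direct reproduction; while the apiserver stays down it returns the same "connection refused":

	sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes \
	  --kubeconfig=/var/lib/minikube/kubeconfig
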
	I1006 14:29:44.740873  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:29:44.751791  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:29:44.751852  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:29:44.777079  656123 cri.go:89] found id: ""
	I1006 14:29:44.777096  656123 logs.go:282] 0 containers: []
	W1006 14:29:44.777103  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:29:44.777108  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:29:44.777158  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:29:44.802137  656123 cri.go:89] found id: ""
	I1006 14:29:44.802151  656123 logs.go:282] 0 containers: []
	W1006 14:29:44.802158  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:29:44.802163  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:29:44.802227  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:29:44.827942  656123 cri.go:89] found id: ""
	I1006 14:29:44.827957  656123 logs.go:282] 0 containers: []
	W1006 14:29:44.827964  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:29:44.827970  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:29:44.828014  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:29:44.853867  656123 cri.go:89] found id: ""
	I1006 14:29:44.853886  656123 logs.go:282] 0 containers: []
	W1006 14:29:44.853894  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:29:44.853901  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:29:44.853956  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:29:44.879907  656123 cri.go:89] found id: ""
	I1006 14:29:44.879923  656123 logs.go:282] 0 containers: []
	W1006 14:29:44.879931  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:29:44.879937  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:29:44.879994  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:29:44.905634  656123 cri.go:89] found id: ""
	I1006 14:29:44.905654  656123 logs.go:282] 0 containers: []
	W1006 14:29:44.905663  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:29:44.905673  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:29:44.905731  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:29:44.932500  656123 cri.go:89] found id: ""
	I1006 14:29:44.932515  656123 logs.go:282] 0 containers: []
	W1006 14:29:44.932524  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:29:44.932532  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:29:44.932543  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:29:44.960602  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:29:44.960619  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:29:45.030445  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:29:45.030474  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:29:45.043971  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:29:45.043991  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:29:45.101230  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:29:45.093566    8088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:45.094142    8088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:45.095685    8088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:45.096125    8088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:45.097721    8088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1006 14:29:45.101246  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:29:45.101259  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:29:47.666091  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:29:47.677001  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:29:47.677061  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:29:47.703386  656123 cri.go:89] found id: ""
	I1006 14:29:47.703404  656123 logs.go:282] 0 containers: []
	W1006 14:29:47.703412  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:29:47.703423  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:29:47.703482  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:29:47.729961  656123 cri.go:89] found id: ""
	I1006 14:29:47.729978  656123 logs.go:282] 0 containers: []
	W1006 14:29:47.729985  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:29:47.729998  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:29:47.730046  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:29:47.757114  656123 cri.go:89] found id: ""
	I1006 14:29:47.757148  656123 logs.go:282] 0 containers: []
	W1006 14:29:47.757155  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:29:47.757160  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:29:47.757220  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:29:47.783979  656123 cri.go:89] found id: ""
	I1006 14:29:47.783997  656123 logs.go:282] 0 containers: []
	W1006 14:29:47.784004  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:29:47.784008  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:29:47.784054  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:29:47.809265  656123 cri.go:89] found id: ""
	I1006 14:29:47.809280  656123 logs.go:282] 0 containers: []
	W1006 14:29:47.809287  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:29:47.809292  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:29:47.809337  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:29:47.834447  656123 cri.go:89] found id: ""
	I1006 14:29:47.834463  656123 logs.go:282] 0 containers: []
	W1006 14:29:47.834470  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:29:47.834474  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:29:47.834518  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:29:47.860785  656123 cri.go:89] found id: ""
	I1006 14:29:47.860802  656123 logs.go:282] 0 containers: []
	W1006 14:29:47.860808  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:29:47.860817  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:29:47.860827  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:29:47.928576  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:29:47.928600  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:29:47.942643  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:29:47.942669  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:29:48.000352  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:29:47.992403    8197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:47.992971    8197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:47.994566    8197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:47.995054    8197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:47.996597    8197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1006 14:29:48.000373  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:29:48.000391  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:29:48.065612  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:29:48.065640  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:29:50.596504  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:29:50.607654  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:29:50.607709  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:29:50.634723  656123 cri.go:89] found id: ""
	I1006 14:29:50.634742  656123 logs.go:282] 0 containers: []
	W1006 14:29:50.634751  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:29:50.634758  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:29:50.634821  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:29:50.662103  656123 cri.go:89] found id: ""
	I1006 14:29:50.662122  656123 logs.go:282] 0 containers: []
	W1006 14:29:50.662152  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:29:50.662160  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:29:50.662232  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:29:50.688627  656123 cri.go:89] found id: ""
	I1006 14:29:50.688646  656123 logs.go:282] 0 containers: []
	W1006 14:29:50.688653  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:29:50.688658  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:29:50.688719  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:29:50.715511  656123 cri.go:89] found id: ""
	I1006 14:29:50.715530  656123 logs.go:282] 0 containers: []
	W1006 14:29:50.715540  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:29:50.715544  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:29:50.715608  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:29:50.742597  656123 cri.go:89] found id: ""
	I1006 14:29:50.742612  656123 logs.go:282] 0 containers: []
	W1006 14:29:50.742619  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:29:50.742624  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:29:50.742671  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:29:50.769656  656123 cri.go:89] found id: ""
	I1006 14:29:50.769672  656123 logs.go:282] 0 containers: []
	W1006 14:29:50.769679  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:29:50.769684  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:29:50.769740  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:29:50.797585  656123 cri.go:89] found id: ""
	I1006 14:29:50.797603  656123 logs.go:282] 0 containers: []
	W1006 14:29:50.797611  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:29:50.797620  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:29:50.797631  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:29:50.811635  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:29:50.811664  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:29:50.870641  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:29:50.863296    8314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:50.863835    8314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:50.865405    8314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:50.865832    8314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:50.866946    8314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1006 14:29:50.870652  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:29:50.870665  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:29:50.933617  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:29:50.933644  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:29:50.964985  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:29:50.965003  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:29:53.535109  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:29:53.545986  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:29:53.546039  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:29:53.571300  656123 cri.go:89] found id: ""
	I1006 14:29:53.571315  656123 logs.go:282] 0 containers: []
	W1006 14:29:53.571322  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:29:53.571328  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:29:53.571373  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:29:53.597111  656123 cri.go:89] found id: ""
	I1006 14:29:53.597126  656123 logs.go:282] 0 containers: []
	W1006 14:29:53.597132  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:29:53.597137  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:29:53.597188  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:29:53.621477  656123 cri.go:89] found id: ""
	I1006 14:29:53.621493  656123 logs.go:282] 0 containers: []
	W1006 14:29:53.621500  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:29:53.621504  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:29:53.621550  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:29:53.647877  656123 cri.go:89] found id: ""
	I1006 14:29:53.647891  656123 logs.go:282] 0 containers: []
	W1006 14:29:53.647898  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:29:53.647902  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:29:53.647947  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:29:53.673269  656123 cri.go:89] found id: ""
	I1006 14:29:53.673284  656123 logs.go:282] 0 containers: []
	W1006 14:29:53.673291  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:29:53.673296  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:29:53.673356  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:29:53.698368  656123 cri.go:89] found id: ""
	I1006 14:29:53.698384  656123 logs.go:282] 0 containers: []
	W1006 14:29:53.698390  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:29:53.698395  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:29:53.698446  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:29:53.724452  656123 cri.go:89] found id: ""
	I1006 14:29:53.724471  656123 logs.go:282] 0 containers: []
	W1006 14:29:53.724481  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:29:53.724491  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:29:53.724507  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:29:53.790937  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:29:53.790959  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:29:53.804913  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:29:53.804929  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:29:53.862094  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:29:53.854344    8433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:53.854872    8433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:53.856476    8433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:53.856953    8433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:53.858577    8433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:29:53.854344    8433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:53.854872    8433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:53.856476    8433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:53.856953    8433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:53.858577    8433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 14:29:53.862111  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:29:53.862124  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:29:53.921847  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:29:53.921867  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
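The "container status" collector is a shell fallback chain: command substitution resolves crictl from PATH (falling back to the bare name so any error message still mentions crictl), and only if that whole invocation fails does it try docker. Spelled out, the one-liner from the log is equivalent to:

    # Prefer crictl; if it is missing or errors out, fall back to docker.
    CRICTL=$(which crictl || echo crictl)
    sudo "$CRICTL" ps -a || sudo docker ps -a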
	I1006 14:29:56.452775  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:29:56.464702  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:29:56.464760  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:29:56.491587  656123 cri.go:89] found id: ""
	I1006 14:29:56.491603  656123 logs.go:282] 0 containers: []
	W1006 14:29:56.491609  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:29:56.491614  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:29:56.491662  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:29:56.517138  656123 cri.go:89] found id: ""
	I1006 14:29:56.517157  656123 logs.go:282] 0 containers: []
	W1006 14:29:56.517166  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:29:56.517170  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:29:56.517243  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:29:56.542713  656123 cri.go:89] found id: ""
	I1006 14:29:56.542728  656123 logs.go:282] 0 containers: []
	W1006 14:29:56.542735  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:29:56.542740  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:29:56.542787  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:29:56.568528  656123 cri.go:89] found id: ""
	I1006 14:29:56.568545  656123 logs.go:282] 0 containers: []
	W1006 14:29:56.568554  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:29:56.568561  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:29:56.568616  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:29:56.593881  656123 cri.go:89] found id: ""
	I1006 14:29:56.593897  656123 logs.go:282] 0 containers: []
	W1006 14:29:56.593904  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:29:56.593909  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:29:56.593957  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:29:56.618843  656123 cri.go:89] found id: ""
	I1006 14:29:56.618862  656123 logs.go:282] 0 containers: []
	W1006 14:29:56.618869  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:29:56.618874  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:29:56.618931  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:29:56.644219  656123 cri.go:89] found id: ""
	I1006 14:29:56.644239  656123 logs.go:282] 0 containers: []
	W1006 14:29:56.644249  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:29:56.644258  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:29:56.644270  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:29:56.701345  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:29:56.693737    8555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:56.694299    8555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:56.695864    8555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:56.696432    8555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:56.697961    8555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:29:56.693737    8555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:56.694299    8555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:56.695864    8555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:56.696432    8555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:56.697961    8555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 14:29:56.701372  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:29:56.701384  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:29:56.762071  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:29:56.762096  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:29:56.791634  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:29:56.791656  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:29:56.857469  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:29:56.857492  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
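The dmesg invocation filters the kernel ring buffer down to what matters for triage: -P disables the pager, -H keeps the human-readable output format, -L=never strips color codes so the capture stays plain text, and --level restricts output to warning severity and worse before tail caps it at 400 lines. The long-option form of the same command (util-linux dmesg):

    sudo dmesg --nopager --human --color=never \
         --level warn,err,crit,alert,emerg | tail -n 400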
	I1006 14:29:59.371748  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:29:59.383943  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:29:59.384004  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:29:59.411674  656123 cri.go:89] found id: ""
	I1006 14:29:59.411695  656123 logs.go:282] 0 containers: []
	W1006 14:29:59.411703  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:29:59.411712  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:29:59.411829  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:29:59.438177  656123 cri.go:89] found id: ""
	I1006 14:29:59.438193  656123 logs.go:282] 0 containers: []
	W1006 14:29:59.438200  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:29:59.438217  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:29:59.438276  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:29:59.467581  656123 cri.go:89] found id: ""
	I1006 14:29:59.467601  656123 logs.go:282] 0 containers: []
	W1006 14:29:59.467611  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:29:59.467619  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:29:59.467682  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:29:59.496610  656123 cri.go:89] found id: ""
	I1006 14:29:59.496626  656123 logs.go:282] 0 containers: []
	W1006 14:29:59.496633  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:29:59.496638  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:29:59.496684  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:29:59.523799  656123 cri.go:89] found id: ""
	I1006 14:29:59.523815  656123 logs.go:282] 0 containers: []
	W1006 14:29:59.523822  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:29:59.523827  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:29:59.523889  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:29:59.550529  656123 cri.go:89] found id: ""
	I1006 14:29:59.550546  656123 logs.go:282] 0 containers: []
	W1006 14:29:59.550553  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:29:59.550558  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:29:59.550606  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:29:59.577487  656123 cri.go:89] found id: ""
	I1006 14:29:59.577503  656123 logs.go:282] 0 containers: []
	W1006 14:29:59.577509  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:29:59.577518  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:29:59.577529  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:29:59.607238  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:29:59.607260  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:29:59.676960  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:29:59.676986  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:29:59.690846  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:29:59.690869  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:29:59.749311  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:29:59.741475    8704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:59.742053    8704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:59.743670    8704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:59.744122    8704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:59.745515    8704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:29:59.741475    8704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:59.742053    8704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:59.743670    8704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:59.744122    8704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:59.745515    8704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 14:29:59.749329  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:29:59.749339  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
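Both service logs are sampled the same way: journalctl -u <unit> -n 400 takes the last 400 lines of that unit's journal. When reproducing this failure interactively it can be more useful to stream the units together rather than sample them; something like the following (illustrative, not what the test harness runs):

    # Follow kubelet and CRI-O side by side while retrying the start.
    sudo journalctl -u kubelet -u crio -f --since "5 min ago"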
	I1006 14:30:02.310264  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:30:02.321519  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:30:02.321570  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:30:02.347821  656123 cri.go:89] found id: ""
	I1006 14:30:02.347842  656123 logs.go:282] 0 containers: []
	W1006 14:30:02.347852  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:30:02.347860  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:30:02.347920  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:30:02.373381  656123 cri.go:89] found id: ""
	I1006 14:30:02.373404  656123 logs.go:282] 0 containers: []
	W1006 14:30:02.373412  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:30:02.373418  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:30:02.373462  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:30:02.401169  656123 cri.go:89] found id: ""
	I1006 14:30:02.401189  656123 logs.go:282] 0 containers: []
	W1006 14:30:02.401199  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:30:02.401215  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:30:02.401271  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:30:02.427774  656123 cri.go:89] found id: ""
	I1006 14:30:02.427790  656123 logs.go:282] 0 containers: []
	W1006 14:30:02.427799  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:30:02.427806  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:30:02.427858  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:30:02.453624  656123 cri.go:89] found id: ""
	I1006 14:30:02.453642  656123 logs.go:282] 0 containers: []
	W1006 14:30:02.453652  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:30:02.453659  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:30:02.453725  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:30:02.480503  656123 cri.go:89] found id: ""
	I1006 14:30:02.480520  656123 logs.go:282] 0 containers: []
	W1006 14:30:02.480526  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:30:02.480531  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:30:02.480581  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:30:02.506624  656123 cri.go:89] found id: ""
	I1006 14:30:02.506643  656123 logs.go:282] 0 containers: []
	W1006 14:30:02.506652  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:30:02.506662  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:30:02.506675  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:30:02.575030  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:30:02.575055  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:30:02.589240  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:30:02.589266  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:30:02.647840  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:30:02.640193    8804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:02.640759    8804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:02.642327    8804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:02.642757    8804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:02.644424    8804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:30:02.640193    8804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:02.640759    8804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:02.642327    8804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:02.642757    8804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:02.644424    8804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 14:30:02.647855  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:30:02.647866  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:30:02.710907  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:30:02.710932  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
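Every describe-nodes attempt fails with "dial tcp [::1]:8441: connect: connection refused", which is the client-side symptom of nothing listening on the apiserver port at all, consistent with crictl finding no kube-apiserver container. Two quick manual checks from inside the node (ss is from iproute2; port 8441 is this profile's apiserver port, taken from the kubeconfig the log uses):

    sudo ss -ltn 'sport = :8441'          # empty listing = no listener yet
    curl -k https://localhost:8441/livez  # refused while the apiserver is down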
	I1006 14:30:05.243556  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:30:05.254230  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:30:05.254287  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:30:05.279490  656123 cri.go:89] found id: ""
	I1006 14:30:05.279506  656123 logs.go:282] 0 containers: []
	W1006 14:30:05.279514  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:30:05.279520  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:30:05.279572  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:30:05.305513  656123 cri.go:89] found id: ""
	I1006 14:30:05.305533  656123 logs.go:282] 0 containers: []
	W1006 14:30:05.305539  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:30:05.305544  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:30:05.305591  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:30:05.331962  656123 cri.go:89] found id: ""
	I1006 14:30:05.331981  656123 logs.go:282] 0 containers: []
	W1006 14:30:05.331990  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:30:05.331996  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:30:05.332058  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:30:05.357789  656123 cri.go:89] found id: ""
	I1006 14:30:05.357807  656123 logs.go:282] 0 containers: []
	W1006 14:30:05.357815  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:30:05.357820  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:30:05.357866  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:30:05.383637  656123 cri.go:89] found id: ""
	I1006 14:30:05.383658  656123 logs.go:282] 0 containers: []
	W1006 14:30:05.383664  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:30:05.383669  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:30:05.383715  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:30:05.408314  656123 cri.go:89] found id: ""
	I1006 14:30:05.408332  656123 logs.go:282] 0 containers: []
	W1006 14:30:05.408341  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:30:05.408348  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:30:05.408418  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:30:05.433843  656123 cri.go:89] found id: ""
	I1006 14:30:05.433861  656123 logs.go:282] 0 containers: []
	W1006 14:30:05.433867  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:30:05.433876  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:30:05.433888  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:30:05.494147  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:30:05.494176  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:30:05.523997  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:30:05.524016  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:30:05.591019  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:30:05.591039  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:30:05.604531  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:30:05.604546  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:30:05.660873  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:30:05.653677    8938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:05.654169    8938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:05.655684    8938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:05.656053    8938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:05.657599    8938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:30:05.653677    8938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:05.654169    8938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:05.655684    8938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:05.656053    8938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:05.657599    8938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
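Each probe round checks the same seven names (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet), and every one returns an empty ID list even with -a, so the runtime has never created any control-plane container, not even an exited one. That points at kubelet never acting on the static pod manifests; assuming the kubeadm default layout minikube uses, the first things to inspect are:

    ls -l /etc/kubernetes/manifests/      # are the static pod manifests there?
    sudo journalctl -u kubelet -n 100 --no-pager | grep -iE 'error|fail'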
	I1006 14:30:08.162635  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:30:08.173492  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:30:08.173538  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:30:08.199879  656123 cri.go:89] found id: ""
	I1006 14:30:08.199896  656123 logs.go:282] 0 containers: []
	W1006 14:30:08.199902  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:30:08.199907  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:30:08.199954  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:30:08.225501  656123 cri.go:89] found id: ""
	I1006 14:30:08.225520  656123 logs.go:282] 0 containers: []
	W1006 14:30:08.225531  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:30:08.225537  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:30:08.225598  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:30:08.251711  656123 cri.go:89] found id: ""
	I1006 14:30:08.251730  656123 logs.go:282] 0 containers: []
	W1006 14:30:08.251737  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:30:08.251742  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:30:08.251790  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:30:08.277559  656123 cri.go:89] found id: ""
	I1006 14:30:08.277575  656123 logs.go:282] 0 containers: []
	W1006 14:30:08.277584  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:30:08.277594  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:30:08.277656  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:30:08.303749  656123 cri.go:89] found id: ""
	I1006 14:30:08.303767  656123 logs.go:282] 0 containers: []
	W1006 14:30:08.303776  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:30:08.303781  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:30:08.303830  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:30:08.329034  656123 cri.go:89] found id: ""
	I1006 14:30:08.329053  656123 logs.go:282] 0 containers: []
	W1006 14:30:08.329059  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:30:08.329064  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:30:08.329111  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:30:08.354393  656123 cri.go:89] found id: ""
	I1006 14:30:08.354409  656123 logs.go:282] 0 containers: []
	W1006 14:30:08.354416  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:30:08.354423  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:30:08.354434  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:30:08.416780  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:30:08.416799  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:30:08.444904  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:30:08.444925  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:30:08.518089  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:30:08.518111  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:30:08.531108  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:30:08.531124  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:30:08.586529  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:30:08.578762    9065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:08.579607    9065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:08.581199    9065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:08.581663    9065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:08.583179    9065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:30:08.578762    9065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:08.579607    9065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:08.581199    9065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:08.581663    9065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:08.583179    9065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
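The describe-nodes probe runs the node's own kubectl binary against the node-local kubeconfig; the repeated memcache.go errors come from client-go's API discovery failing before the describe itself ever runs. To reproduce the probe by hand with more client-side detail (binary path and kubeconfig are verbatim from the log; -v=6 is ordinary kubectl verbosity, added here for illustration):

    sudo /var/lib/minikube/binaries/v1.34.1/kubectl \
        --kubeconfig=/var/lib/minikube/kubeconfig \
        describe nodes -v=6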
	I1006 14:30:11.087318  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:30:11.098631  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:30:11.098701  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:30:11.125423  656123 cri.go:89] found id: ""
	I1006 14:30:11.125441  656123 logs.go:282] 0 containers: []
	W1006 14:30:11.125450  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:30:11.125456  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:30:11.125520  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:30:11.154785  656123 cri.go:89] found id: ""
	I1006 14:30:11.154803  656123 logs.go:282] 0 containers: []
	W1006 14:30:11.154810  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:30:11.154815  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:30:11.154868  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:30:11.180879  656123 cri.go:89] found id: ""
	I1006 14:30:11.180899  656123 logs.go:282] 0 containers: []
	W1006 14:30:11.180908  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:30:11.180915  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:30:11.180979  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:30:11.207281  656123 cri.go:89] found id: ""
	I1006 14:30:11.207308  656123 logs.go:282] 0 containers: []
	W1006 14:30:11.207318  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:30:11.207326  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:30:11.207391  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:30:11.234275  656123 cri.go:89] found id: ""
	I1006 14:30:11.234293  656123 logs.go:282] 0 containers: []
	W1006 14:30:11.234302  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:30:11.234308  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:30:11.234379  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:30:11.261486  656123 cri.go:89] found id: ""
	I1006 14:30:11.261502  656123 logs.go:282] 0 containers: []
	W1006 14:30:11.261508  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:30:11.261514  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:30:11.261561  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:30:11.287155  656123 cri.go:89] found id: ""
	I1006 14:30:11.287173  656123 logs.go:282] 0 containers: []
	W1006 14:30:11.287180  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:30:11.287189  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:30:11.287223  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:30:11.358359  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:30:11.358383  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:30:11.372359  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:30:11.372385  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:30:11.430998  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:30:11.423269    9166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:11.423805    9166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:11.425394    9166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:11.425911    9166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:11.427479    9166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:30:11.423269    9166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:11.423805    9166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:11.425394    9166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:11.425911    9166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:11.427479    9166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 14:30:11.431012  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:30:11.431023  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:30:11.498514  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:30:11.498538  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
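A note on reading these lines: the leading I/W/E plus four digits is the klog severity header (Lmmdd hh:mm:ss.uuuuuu PID file:line] message), so I1006 is an Info line from October 6, W1006 a warning, E1006 an error. That makes it easy to skim a saved capture for the interesting lines; for example, with the output saved to a hypothetical minikube-start.log:

    # Warnings and errors only (leading whitespace tolerated).
    grep -E '^[[:space:]]*[WE][0-9]{4} ' minikube-start.log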
	I1006 14:30:14.030847  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:30:14.041715  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:30:14.041763  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:30:14.067907  656123 cri.go:89] found id: ""
	I1006 14:30:14.067927  656123 logs.go:282] 0 containers: []
	W1006 14:30:14.067938  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:30:14.067944  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:30:14.067992  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:30:14.093781  656123 cri.go:89] found id: ""
	I1006 14:30:14.093800  656123 logs.go:282] 0 containers: []
	W1006 14:30:14.093810  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:30:14.093817  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:30:14.093873  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:30:14.120737  656123 cri.go:89] found id: ""
	I1006 14:30:14.120752  656123 logs.go:282] 0 containers: []
	W1006 14:30:14.120759  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:30:14.120765  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:30:14.120825  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:30:14.148551  656123 cri.go:89] found id: ""
	I1006 14:30:14.148567  656123 logs.go:282] 0 containers: []
	W1006 14:30:14.148575  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:30:14.148580  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:30:14.148632  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:30:14.174943  656123 cri.go:89] found id: ""
	I1006 14:30:14.174960  656123 logs.go:282] 0 containers: []
	W1006 14:30:14.174965  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:30:14.174970  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:30:14.175032  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:30:14.201148  656123 cri.go:89] found id: ""
	I1006 14:30:14.201163  656123 logs.go:282] 0 containers: []
	W1006 14:30:14.201172  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:30:14.201178  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:30:14.201245  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:30:14.228046  656123 cri.go:89] found id: ""
	I1006 14:30:14.228062  656123 logs.go:282] 0 containers: []
	W1006 14:30:14.228068  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:30:14.228077  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:30:14.228087  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:30:14.300889  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:30:14.300914  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:30:14.314304  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:30:14.314326  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:30:14.370818  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:30:14.363282    9300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:14.363836    9300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:14.365383    9300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:14.365793    9300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:14.367329    9300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:30:14.363282    9300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:14.363836    9300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:14.365383    9300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:14.365793    9300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:14.367329    9300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 14:30:14.370827  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:30:14.370838  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:30:14.431681  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:30:14.431704  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:30:16.961397  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:30:16.973165  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:30:16.973247  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:30:17.001273  656123 cri.go:89] found id: ""
	I1006 14:30:17.001291  656123 logs.go:282] 0 containers: []
	W1006 14:30:17.001297  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:30:17.001302  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:30:17.001354  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:30:17.027536  656123 cri.go:89] found id: ""
	I1006 14:30:17.027557  656123 logs.go:282] 0 containers: []
	W1006 14:30:17.027565  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:30:17.027570  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:30:17.027622  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:30:17.054924  656123 cri.go:89] found id: ""
	I1006 14:30:17.054940  656123 logs.go:282] 0 containers: []
	W1006 14:30:17.054947  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:30:17.054953  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:30:17.055000  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:30:17.083443  656123 cri.go:89] found id: ""
	I1006 14:30:17.083460  656123 logs.go:282] 0 containers: []
	W1006 14:30:17.083467  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:30:17.083472  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:30:17.083522  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:30:17.111442  656123 cri.go:89] found id: ""
	I1006 14:30:17.111459  656123 logs.go:282] 0 containers: []
	W1006 14:30:17.111467  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:30:17.111474  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:30:17.111530  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:30:17.138310  656123 cri.go:89] found id: ""
	I1006 14:30:17.138329  656123 logs.go:282] 0 containers: []
	W1006 14:30:17.138338  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:30:17.138344  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:30:17.138393  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:30:17.166360  656123 cri.go:89] found id: ""
	I1006 14:30:17.166389  656123 logs.go:282] 0 containers: []
	W1006 14:30:17.166400  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:30:17.166411  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:30:17.166427  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:30:17.238488  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:30:17.238516  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:30:17.252654  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:30:17.252688  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:30:17.312602  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:30:17.304484    9418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:17.305059    9418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:17.306672    9418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:17.307166    9418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:17.308768    9418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
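Every describe-nodes attempt in this log fails identically: the node-local kubectl dials https://localhost:8441 (this profile's apiserver port) and gets connection refused, which matches the empty kube-apiserver container listings above. A quick hedged check from inside the node (assumes curl is available; /livez is the standard apiserver liveness endpoint on this Kubernetes version):

    # -k skips certificate verification, -s silences progress output
    curl -sk https://localhost:8441/livez || echo "apiserver not listening on 8441"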
	I1006 14:30:17.312623  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:30:17.312634  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:30:17.375185  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:30:17.375222  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
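The container-status command uses a backtick fallback: `which crictl || echo crictl` substitutes the crictl path when installed and the bare word crictl otherwise (so the first branch fails cleanly), and the outer || then falls through to docker. A plainer equivalent, as a sketch:

    if command -v crictl >/dev/null 2>&1; then
        sudo crictl ps -a
    else
        sudo docker ps -a
    fi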
	I1006 14:30:19.907611  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:30:19.918724  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:30:19.918776  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:30:19.945244  656123 cri.go:89] found id: ""
	I1006 14:30:19.945264  656123 logs.go:282] 0 containers: []
	W1006 14:30:19.945277  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:30:19.945285  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:30:19.945343  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:30:19.972919  656123 cri.go:89] found id: ""
	I1006 14:30:19.972939  656123 logs.go:282] 0 containers: []
	W1006 14:30:19.972949  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:30:19.972955  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:30:19.973008  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:30:19.999841  656123 cri.go:89] found id: ""
	I1006 14:30:19.999858  656123 logs.go:282] 0 containers: []
	W1006 14:30:19.999864  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:30:19.999870  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:30:19.999926  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:30:20.027271  656123 cri.go:89] found id: ""
	I1006 14:30:20.027290  656123 logs.go:282] 0 containers: []
	W1006 14:30:20.027299  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:30:20.027306  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:30:20.027364  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:30:20.054297  656123 cri.go:89] found id: ""
	I1006 14:30:20.054313  656123 logs.go:282] 0 containers: []
	W1006 14:30:20.054320  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:30:20.054325  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:30:20.054380  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:30:20.081354  656123 cri.go:89] found id: ""
	I1006 14:30:20.081374  656123 logs.go:282] 0 containers: []
	W1006 14:30:20.081380  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:30:20.081386  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:30:20.081438  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:30:20.108256  656123 cri.go:89] found id: ""
	I1006 14:30:20.108273  656123 logs.go:282] 0 containers: []
	W1006 14:30:20.108280  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:30:20.108289  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:30:20.108303  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:30:20.177476  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:30:20.177501  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:30:20.191396  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:30:20.191419  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:30:20.250424  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:30:20.242535    9540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:20.243129    9540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:20.244697    9540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:20.245110    9540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:20.246705    9540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1006 14:30:20.250437  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:30:20.250448  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:30:20.311404  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:30:20.311430  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
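Each cycle opens with the pgrep probe seen above: -f matches against the full command line, -x requires the pattern to match that whole line, and -n reports only the newest matching PID. Run standalone (pattern quoted here to keep the shell from globbing it):

    sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "no kube-apiserver process yet"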
	I1006 14:30:22.842482  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:30:22.854386  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:30:22.854451  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:30:22.882144  656123 cri.go:89] found id: ""
	I1006 14:30:22.882160  656123 logs.go:282] 0 containers: []
	W1006 14:30:22.882167  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:30:22.882176  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:30:22.882244  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:30:22.908078  656123 cri.go:89] found id: ""
	I1006 14:30:22.908097  656123 logs.go:282] 0 containers: []
	W1006 14:30:22.908106  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:30:22.908112  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:30:22.908163  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:30:22.934596  656123 cri.go:89] found id: ""
	I1006 14:30:22.934613  656123 logs.go:282] 0 containers: []
	W1006 14:30:22.934620  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:30:22.934624  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:30:22.934673  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:30:22.961803  656123 cri.go:89] found id: ""
	I1006 14:30:22.961821  656123 logs.go:282] 0 containers: []
	W1006 14:30:22.961830  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:30:22.961837  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:30:22.961889  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:30:22.988277  656123 cri.go:89] found id: ""
	I1006 14:30:22.988293  656123 logs.go:282] 0 containers: []
	W1006 14:30:22.988300  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:30:22.988305  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:30:22.988355  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:30:23.015411  656123 cri.go:89] found id: ""
	I1006 14:30:23.015428  656123 logs.go:282] 0 containers: []
	W1006 14:30:23.015436  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:30:23.015441  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:30:23.015494  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:30:23.042508  656123 cri.go:89] found id: ""
	I1006 14:30:23.042526  656123 logs.go:282] 0 containers: []
	W1006 14:30:23.042534  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:30:23.042545  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:30:23.042558  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:30:23.110932  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:30:23.110957  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:30:23.125294  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:30:23.125322  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:30:23.185388  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:30:23.177268    9660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:23.177825    9660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:23.179508    9660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:23.179961    9660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:23.181496    9660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1006 14:30:23.185405  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:30:23.185418  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:30:23.246673  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:30:23.246696  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
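The journald gathers are plain unit-scoped reads: -u selects the systemd unit and -n caps output at the last 400 entries. The same two commands the log runs each cycle:

    sudo journalctl -u kubelet -n 400
    sudo journalctl -u crio -n 400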
	I1006 14:30:25.778383  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:30:25.789490  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:30:25.789539  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:30:25.816713  656123 cri.go:89] found id: ""
	I1006 14:30:25.816731  656123 logs.go:282] 0 containers: []
	W1006 14:30:25.816737  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:30:25.816742  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:30:25.816792  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:30:25.844676  656123 cri.go:89] found id: ""
	I1006 14:30:25.844699  656123 logs.go:282] 0 containers: []
	W1006 14:30:25.844708  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:30:25.844716  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:30:25.844784  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:30:25.872027  656123 cri.go:89] found id: ""
	I1006 14:30:25.872046  656123 logs.go:282] 0 containers: []
	W1006 14:30:25.872054  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:30:25.872059  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:30:25.872115  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:30:25.898454  656123 cri.go:89] found id: ""
	I1006 14:30:25.898473  656123 logs.go:282] 0 containers: []
	W1006 14:30:25.898480  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:30:25.898486  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:30:25.898548  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:30:25.926559  656123 cri.go:89] found id: ""
	I1006 14:30:25.926576  656123 logs.go:282] 0 containers: []
	W1006 14:30:25.926583  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:30:25.926589  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:30:25.926638  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:30:25.953516  656123 cri.go:89] found id: ""
	I1006 14:30:25.953535  656123 logs.go:282] 0 containers: []
	W1006 14:30:25.953544  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:30:25.953562  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:30:25.953634  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:30:25.980962  656123 cri.go:89] found id: ""
	I1006 14:30:25.980978  656123 logs.go:282] 0 containers: []
	W1006 14:30:25.980986  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:30:25.980994  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:30:25.981012  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:30:26.052486  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:30:26.052510  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:30:26.066688  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:30:26.066710  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:30:26.126899  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:30:26.118941    9785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:26.119633    9785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:26.121265    9785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:26.121767    9785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:26.123331    9785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1006 14:30:26.126912  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:30:26.126924  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:30:26.187018  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:30:26.187047  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
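The dmesg invocation filters the kernel ring buffer down to actionable lines: -H for human-readable output, -P to disable the pager, -L=never to drop colour codes, and --level to keep only warnings and worse before tailing the last 400 lines. Verbatim from the log:

    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400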
	I1006 14:30:28.721028  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:30:28.732295  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:30:28.732361  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:30:28.759561  656123 cri.go:89] found id: ""
	I1006 14:30:28.759583  656123 logs.go:282] 0 containers: []
	W1006 14:30:28.759592  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:30:28.759598  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:30:28.759651  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:30:28.787553  656123 cri.go:89] found id: ""
	I1006 14:30:28.787573  656123 logs.go:282] 0 containers: []
	W1006 14:30:28.787584  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:30:28.787598  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:30:28.787653  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:30:28.813499  656123 cri.go:89] found id: ""
	I1006 14:30:28.813520  656123 logs.go:282] 0 containers: []
	W1006 14:30:28.813529  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:30:28.813535  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:30:28.813591  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:30:28.840441  656123 cri.go:89] found id: ""
	I1006 14:30:28.840462  656123 logs.go:282] 0 containers: []
	W1006 14:30:28.840468  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:30:28.840474  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:30:28.840523  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:30:28.867632  656123 cri.go:89] found id: ""
	I1006 14:30:28.867647  656123 logs.go:282] 0 containers: []
	W1006 14:30:28.867654  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:30:28.867659  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:30:28.867709  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:30:28.895005  656123 cri.go:89] found id: ""
	I1006 14:30:28.895023  656123 logs.go:282] 0 containers: []
	W1006 14:30:28.895029  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:30:28.895034  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:30:28.895082  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:30:28.920965  656123 cri.go:89] found id: ""
	I1006 14:30:28.920983  656123 logs.go:282] 0 containers: []
	W1006 14:30:28.920993  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:30:28.921003  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:30:28.921017  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:30:28.981278  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:30:28.981302  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:30:29.010983  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:30:29.011000  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:30:29.078541  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:30:29.078565  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:30:29.092586  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:30:29.092613  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:30:29.151129  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:30:29.143937    9927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:29.144542    9927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:29.146112    9927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:29.146650    9927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:29.147708    9927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1006 14:30:31.652214  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:30:31.663823  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:30:31.663891  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:30:31.690576  656123 cri.go:89] found id: ""
	I1006 14:30:31.690596  656123 logs.go:282] 0 containers: []
	W1006 14:30:31.690606  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:30:31.690613  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:30:31.690666  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:30:31.716874  656123 cri.go:89] found id: ""
	I1006 14:30:31.716894  656123 logs.go:282] 0 containers: []
	W1006 14:30:31.716902  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:30:31.716907  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:30:31.716956  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:30:31.744572  656123 cri.go:89] found id: ""
	I1006 14:30:31.744594  656123 logs.go:282] 0 containers: []
	W1006 14:30:31.744603  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:30:31.744611  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:30:31.744681  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:30:31.771539  656123 cri.go:89] found id: ""
	I1006 14:30:31.771556  656123 logs.go:282] 0 containers: []
	W1006 14:30:31.771565  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:30:31.771575  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:30:31.771637  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:30:31.798102  656123 cri.go:89] found id: ""
	I1006 14:30:31.798118  656123 logs.go:282] 0 containers: []
	W1006 14:30:31.798125  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:30:31.798131  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:30:31.798175  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:30:31.825905  656123 cri.go:89] found id: ""
	I1006 14:30:31.825921  656123 logs.go:282] 0 containers: []
	W1006 14:30:31.825928  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:30:31.825933  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:30:31.825985  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:30:31.853474  656123 cri.go:89] found id: ""
	I1006 14:30:31.853489  656123 logs.go:282] 0 containers: []
	W1006 14:30:31.853496  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:30:31.853504  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:30:31.853515  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:30:31.925541  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:30:31.925566  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:30:31.939650  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:30:31.939676  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:30:31.998586  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:30:31.990853   10031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:31.991461   10031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:31.992961   10031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:31.993424   10031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:31.994933   10031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1006 14:30:31.998595  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:30:31.998606  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:30:32.058322  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:30:32.058348  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
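The describe-nodes gather shells out to the kubectl binary minikube installed on the node, pinned to the cluster version, against the node-local kubeconfig, as below; it keeps failing until the apiserver answers on 8441:

    sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes \
        --kubeconfig=/var/lib/minikube/kubeconfig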
	I1006 14:30:34.591129  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:30:34.602495  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:30:34.602545  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:30:34.628973  656123 cri.go:89] found id: ""
	I1006 14:30:34.628991  656123 logs.go:282] 0 containers: []
	W1006 14:30:34.628998  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:30:34.629003  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:30:34.629048  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:30:34.654917  656123 cri.go:89] found id: ""
	I1006 14:30:34.654934  656123 logs.go:282] 0 containers: []
	W1006 14:30:34.654941  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:30:34.654945  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:30:34.654997  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:30:34.680385  656123 cri.go:89] found id: ""
	I1006 14:30:34.680401  656123 logs.go:282] 0 containers: []
	W1006 14:30:34.680408  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:30:34.680413  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:30:34.680459  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:30:34.705914  656123 cri.go:89] found id: ""
	I1006 14:30:34.705929  656123 logs.go:282] 0 containers: []
	W1006 14:30:34.705935  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:30:34.705940  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:30:34.705989  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:30:34.731580  656123 cri.go:89] found id: ""
	I1006 14:30:34.731597  656123 logs.go:282] 0 containers: []
	W1006 14:30:34.731604  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:30:34.731609  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:30:34.731661  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:30:34.756200  656123 cri.go:89] found id: ""
	I1006 14:30:34.756232  656123 logs.go:282] 0 containers: []
	W1006 14:30:34.756239  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:30:34.756244  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:30:34.756293  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:30:34.781770  656123 cri.go:89] found id: ""
	I1006 14:30:34.781785  656123 logs.go:282] 0 containers: []
	W1006 14:30:34.781794  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:30:34.781802  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:30:34.781813  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:30:34.850861  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:30:34.850884  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:30:34.864688  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:30:34.864706  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:30:34.921713  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:30:34.914358   10154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:34.914917   10154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:34.916495   10154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:34.916918   10154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:34.918459   10154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1006 14:30:34.921723  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:30:34.921733  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:30:34.985884  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:30:34.985906  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:30:37.516053  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:30:37.526705  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:30:37.526751  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:30:37.551472  656123 cri.go:89] found id: ""
	I1006 14:30:37.551490  656123 logs.go:282] 0 containers: []
	W1006 14:30:37.551500  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:30:37.551507  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:30:37.551561  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:30:37.576603  656123 cri.go:89] found id: ""
	I1006 14:30:37.576619  656123 logs.go:282] 0 containers: []
	W1006 14:30:37.576626  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:30:37.576630  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:30:37.576674  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:30:37.602217  656123 cri.go:89] found id: ""
	I1006 14:30:37.602241  656123 logs.go:282] 0 containers: []
	W1006 14:30:37.602250  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:30:37.602254  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:30:37.602300  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:30:37.627547  656123 cri.go:89] found id: ""
	I1006 14:30:37.627561  656123 logs.go:282] 0 containers: []
	W1006 14:30:37.627567  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:30:37.627572  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:30:37.627614  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:30:37.652434  656123 cri.go:89] found id: ""
	I1006 14:30:37.652451  656123 logs.go:282] 0 containers: []
	W1006 14:30:37.652460  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:30:37.652467  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:30:37.652519  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:30:37.677543  656123 cri.go:89] found id: ""
	I1006 14:30:37.677558  656123 logs.go:282] 0 containers: []
	W1006 14:30:37.677564  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:30:37.677569  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:30:37.677611  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:30:37.701695  656123 cri.go:89] found id: ""
	I1006 14:30:37.701711  656123 logs.go:282] 0 containers: []
	W1006 14:30:37.701718  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:30:37.701727  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:30:37.701737  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:30:37.730832  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:30:37.730852  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:30:37.799686  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:30:37.799708  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:30:37.813081  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:30:37.813106  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:30:37.869274  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:30:37.861812   10287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:37.862406   10287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:37.863958   10287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:37.864398   10287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:37.865877   10287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1006 14:30:37.869285  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:30:37.869297  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
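Taken together, the timestamps show the whole probe-and-gather sequence repeating roughly every three seconds until the test's wait deadline expires. A condensed sketch of that outer loop (the interval and deadline here are illustrative, not minikube's actual constants):

    deadline=$((SECONDS + 300))             # illustrative 5-minute cap
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
        [ $SECONDS -ge $deadline ] && { echo "apiserver never came up"; break; }
        sleep 3                             # ~cycle time observed in this log
    done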
	I1006 14:30:40.432488  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:30:40.443779  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:30:40.443830  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:30:40.471502  656123 cri.go:89] found id: ""
	I1006 14:30:40.471520  656123 logs.go:282] 0 containers: []
	W1006 14:30:40.471528  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:30:40.471533  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:30:40.471591  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:30:40.498418  656123 cri.go:89] found id: ""
	I1006 14:30:40.498435  656123 logs.go:282] 0 containers: []
	W1006 14:30:40.498442  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:30:40.498447  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:30:40.498495  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:30:40.525987  656123 cri.go:89] found id: ""
	I1006 14:30:40.526003  656123 logs.go:282] 0 containers: []
	W1006 14:30:40.526009  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:30:40.526015  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:30:40.526073  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:30:40.554161  656123 cri.go:89] found id: ""
	I1006 14:30:40.554180  656123 logs.go:282] 0 containers: []
	W1006 14:30:40.554190  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:30:40.554197  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:30:40.554262  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:30:40.581168  656123 cri.go:89] found id: ""
	I1006 14:30:40.581186  656123 logs.go:282] 0 containers: []
	W1006 14:30:40.581193  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:30:40.581198  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:30:40.581272  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:30:40.608862  656123 cri.go:89] found id: ""
	I1006 14:30:40.608879  656123 logs.go:282] 0 containers: []
	W1006 14:30:40.608890  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:30:40.608899  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:30:40.608951  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:30:40.636053  656123 cri.go:89] found id: ""
	I1006 14:30:40.636069  656123 logs.go:282] 0 containers: []
	W1006 14:30:40.636076  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:30:40.636084  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:30:40.636096  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:30:40.649832  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:30:40.649854  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:30:40.708143  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:30:40.700302   10406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:40.700800   10406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:40.702328   10406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:40.702794   10406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:40.704437   10406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1006 14:30:40.708157  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:30:40.708173  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:30:40.767571  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:30:40.767598  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:30:40.798425  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:30:40.798447  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
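The block above is one pass of minikube's apiserver wait loop: `pgrep -xnf kube-apiserver.*minikube.*` finds no apiserver process, `crictl ps -a --quiet --name=<component>` comes back empty for every control-plane component, and the kubelet, dmesg, describe-nodes, CRI-O, and container-status logs are gathered before the next attempt. The same probes can be replayed by hand over `minikube ssh`; a minimal sketch, where the profile name `functional-000000` is a stand-in, not the profile from this run:

    # Look for a running apiserver process inside the node; exits non-zero here, as in the log.
    minikube -p functional-000000 ssh -- sudo pgrep -xnf 'kube-apiserver.*minikube.*'
    # List all kube-apiserver containers known to CRI-O; empty output matches the 'found id: ""' lines above.
    minikube -p functional-000000 ssh -- sudo crictl ps -a --quiet --name=kube-apiserver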
	I1006 14:30:43.369172  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:30:43.380275  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:30:43.380336  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:30:43.407137  656123 cri.go:89] found id: ""
	I1006 14:30:43.407166  656123 logs.go:282] 0 containers: []
	W1006 14:30:43.407172  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:30:43.407178  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:30:43.407255  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:30:43.434264  656123 cri.go:89] found id: ""
	I1006 14:30:43.434280  656123 logs.go:282] 0 containers: []
	W1006 14:30:43.434286  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:30:43.434291  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:30:43.434344  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:30:43.460492  656123 cri.go:89] found id: ""
	I1006 14:30:43.460511  656123 logs.go:282] 0 containers: []
	W1006 14:30:43.460521  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:30:43.460527  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:30:43.460579  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:30:43.486096  656123 cri.go:89] found id: ""
	I1006 14:30:43.486112  656123 logs.go:282] 0 containers: []
	W1006 14:30:43.486118  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:30:43.486123  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:30:43.486180  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:30:43.512166  656123 cri.go:89] found id: ""
	I1006 14:30:43.512182  656123 logs.go:282] 0 containers: []
	W1006 14:30:43.512189  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:30:43.512200  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:30:43.512274  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:30:43.540182  656123 cri.go:89] found id: ""
	I1006 14:30:43.540198  656123 logs.go:282] 0 containers: []
	W1006 14:30:43.540225  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:30:43.540231  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:30:43.540281  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:30:43.566257  656123 cri.go:89] found id: ""
	I1006 14:30:43.566276  656123 logs.go:282] 0 containers: []
	W1006 14:30:43.566283  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:30:43.566291  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:30:43.566301  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:30:43.633282  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:30:43.633308  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:30:43.646525  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:30:43.646547  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:30:43.703245  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:30:43.695412   10527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:43.695958   10527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:43.697564   10527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:43.698089   10527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:43.699634   10527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:30:43.695412   10527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:43.695958   10527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:43.697564   10527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:43.698089   10527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:43.699634   10527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 14:30:43.703258  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:30:43.703271  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:30:43.763009  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:30:43.763030  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:30:46.294610  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:30:46.306608  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:30:46.306657  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:30:46.333990  656123 cri.go:89] found id: ""
	I1006 14:30:46.334010  656123 logs.go:282] 0 containers: []
	W1006 14:30:46.334017  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:30:46.334023  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:30:46.334071  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:30:46.360169  656123 cri.go:89] found id: ""
	I1006 14:30:46.360186  656123 logs.go:282] 0 containers: []
	W1006 14:30:46.360193  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:30:46.360197  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:30:46.360274  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:30:46.386526  656123 cri.go:89] found id: ""
	I1006 14:30:46.386543  656123 logs.go:282] 0 containers: []
	W1006 14:30:46.386552  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:30:46.386559  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:30:46.386618  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:30:46.412732  656123 cri.go:89] found id: ""
	I1006 14:30:46.412755  656123 logs.go:282] 0 containers: []
	W1006 14:30:46.412761  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:30:46.412768  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:30:46.412819  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:30:46.437943  656123 cri.go:89] found id: ""
	I1006 14:30:46.437961  656123 logs.go:282] 0 containers: []
	W1006 14:30:46.437969  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:30:46.437975  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:30:46.438022  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:30:46.462227  656123 cri.go:89] found id: ""
	I1006 14:30:46.462245  656123 logs.go:282] 0 containers: []
	W1006 14:30:46.462254  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:30:46.462259  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:30:46.462308  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:30:46.486426  656123 cri.go:89] found id: ""
	I1006 14:30:46.486446  656123 logs.go:282] 0 containers: []
	W1006 14:30:46.486455  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:30:46.486465  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:30:46.486478  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:30:46.555804  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:30:46.555824  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:30:46.568953  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:30:46.568977  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:30:46.625518  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:30:46.616895   10651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:46.618433   10651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:46.618998   10651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:46.620647   10651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:46.621154   10651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:30:46.616895   10651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:46.618433   10651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:46.618998   10651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:46.620647   10651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:46.621154   10651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 14:30:46.625532  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:30:46.625542  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:30:46.689026  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:30:46.689045  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:30:49.220452  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:30:49.231376  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:30:49.231437  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:30:49.257464  656123 cri.go:89] found id: ""
	I1006 14:30:49.257484  656123 logs.go:282] 0 containers: []
	W1006 14:30:49.257492  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:30:49.257499  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:30:49.257549  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:30:49.282291  656123 cri.go:89] found id: ""
	I1006 14:30:49.282305  656123 logs.go:282] 0 containers: []
	W1006 14:30:49.282315  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:30:49.282322  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:30:49.282374  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:30:49.307787  656123 cri.go:89] found id: ""
	I1006 14:30:49.307806  656123 logs.go:282] 0 containers: []
	W1006 14:30:49.307815  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:30:49.307821  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:30:49.307872  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:30:49.333154  656123 cri.go:89] found id: ""
	I1006 14:30:49.333172  656123 logs.go:282] 0 containers: []
	W1006 14:30:49.333179  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:30:49.333185  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:30:49.333252  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:30:49.359161  656123 cri.go:89] found id: ""
	I1006 14:30:49.359175  656123 logs.go:282] 0 containers: []
	W1006 14:30:49.359183  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:30:49.359188  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:30:49.359253  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:30:49.385380  656123 cri.go:89] found id: ""
	I1006 14:30:49.385398  656123 logs.go:282] 0 containers: []
	W1006 14:30:49.385405  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:30:49.385410  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:30:49.385461  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:30:49.409982  656123 cri.go:89] found id: ""
	I1006 14:30:49.410009  656123 logs.go:282] 0 containers: []
	W1006 14:30:49.410020  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:30:49.410030  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:30:49.410043  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:30:49.470637  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:30:49.470662  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:30:49.498568  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:30:49.498585  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:30:49.568338  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:30:49.568355  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:30:49.581842  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:30:49.581863  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:30:49.638518  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:30:49.631016   10785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:49.631575   10785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:49.633164   10785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:49.633595   10785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:49.635088   10785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:30:49.631016   10785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:49.631575   10785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:49.633164   10785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:49.633595   10785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:49.635088   10785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
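Every `kubectl describe nodes` attempt fails the same way, `dial tcp [::1]:8441: connect: connection refused`, meaning nothing is listening on the apiserver port this profile uses (8441). Two checks from a shell inside the node narrow that down; a sketch, assuming `minikube ssh` access and that `ss` is available in the node image:

    # Confirm there is no listener on the apiserver port.
    sudo ss -ltnp | grep 8441 || echo "no listener on 8441"
    # Tail kubelet for why the static apiserver pod never started, mirroring the journalctl gather above.
    sudo journalctl -u kubelet -n 50 --no-pager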
	I1006 14:30:52.139121  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:30:52.151341  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:30:52.151400  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:30:52.180909  656123 cri.go:89] found id: ""
	I1006 14:30:52.180929  656123 logs.go:282] 0 containers: []
	W1006 14:30:52.180937  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:30:52.180943  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:30:52.181004  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:30:52.212664  656123 cri.go:89] found id: ""
	I1006 14:30:52.212687  656123 logs.go:282] 0 containers: []
	W1006 14:30:52.212695  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:30:52.212700  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:30:52.212753  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:30:52.242804  656123 cri.go:89] found id: ""
	I1006 14:30:52.242824  656123 logs.go:282] 0 containers: []
	W1006 14:30:52.242833  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:30:52.242840  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:30:52.242906  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:30:52.275408  656123 cri.go:89] found id: ""
	I1006 14:30:52.275428  656123 logs.go:282] 0 containers: []
	W1006 14:30:52.275437  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:30:52.275443  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:30:52.275511  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:30:52.304772  656123 cri.go:89] found id: ""
	I1006 14:30:52.304791  656123 logs.go:282] 0 containers: []
	W1006 14:30:52.304797  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:30:52.304802  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:30:52.304855  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:30:52.334628  656123 cri.go:89] found id: ""
	I1006 14:30:52.334646  656123 logs.go:282] 0 containers: []
	W1006 14:30:52.334665  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:30:52.334672  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:30:52.334744  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:30:52.363535  656123 cri.go:89] found id: ""
	I1006 14:30:52.363551  656123 logs.go:282] 0 containers: []
	W1006 14:30:52.363558  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:30:52.363567  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:30:52.363578  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:30:52.395148  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:30:52.395172  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:30:52.467790  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:30:52.467818  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:30:52.483589  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:30:52.483613  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:30:52.547153  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:30:52.538900   10918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:52.539522   10918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:52.541194   10918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:52.541724   10918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:52.543496   10918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:30:52.538900   10918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:52.539522   10918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:52.541194   10918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:52.541724   10918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:52.543496   10918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 14:30:52.547168  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:30:52.547191  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:30:55.111539  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:30:55.123376  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:30:55.123432  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:30:55.151263  656123 cri.go:89] found id: ""
	I1006 14:30:55.151278  656123 logs.go:282] 0 containers: []
	W1006 14:30:55.151285  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:30:55.151289  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:30:55.151354  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:30:55.179099  656123 cri.go:89] found id: ""
	I1006 14:30:55.179116  656123 logs.go:282] 0 containers: []
	W1006 14:30:55.179123  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:30:55.179127  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:30:55.179177  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:30:55.207568  656123 cri.go:89] found id: ""
	I1006 14:30:55.207586  656123 logs.go:282] 0 containers: []
	W1006 14:30:55.207594  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:30:55.207599  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:30:55.207653  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:30:55.236037  656123 cri.go:89] found id: ""
	I1006 14:30:55.236058  656123 logs.go:282] 0 containers: []
	W1006 14:30:55.236068  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:30:55.236075  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:30:55.236132  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:30:55.263286  656123 cri.go:89] found id: ""
	I1006 14:30:55.263304  656123 logs.go:282] 0 containers: []
	W1006 14:30:55.263311  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:30:55.263316  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:30:55.263416  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:30:55.291167  656123 cri.go:89] found id: ""
	I1006 14:30:55.291189  656123 logs.go:282] 0 containers: []
	W1006 14:30:55.291197  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:30:55.291217  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:30:55.291271  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:30:55.318410  656123 cri.go:89] found id: ""
	I1006 14:30:55.318430  656123 logs.go:282] 0 containers: []
	W1006 14:30:55.318440  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:30:55.318450  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:30:55.318461  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:30:55.385160  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:30:55.385187  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:30:55.399050  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:30:55.399076  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:30:55.458418  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:30:55.450518   11025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:55.451123   11025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:55.452726   11025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:55.453351   11025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:55.454908   11025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:30:55.450518   11025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:55.451123   11025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:55.452726   11025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:55.453351   11025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:55.454908   11025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 14:30:55.458432  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:30:55.458448  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:30:55.524792  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:30:55.524816  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
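The "container status" gather on the line above is a fallback chain: sudo `which crictl || echo crictl` ps -a || sudo docker ps -a, preferring crictl and falling back to docker when the first command fails. An equivalent, more explicit sketch of the same idea:

    # Prefer crictl when it is on PATH, otherwise try docker.
    # Unlike the one-liner above, this does not retry with docker when crictl exists but fails.
    if command -v crictl >/dev/null 2>&1; then
      sudo crictl ps -a
    else
      sudo docker ps -a
    fi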
	I1006 14:30:58.057888  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:30:58.068966  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:30:58.069020  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:30:58.096398  656123 cri.go:89] found id: ""
	I1006 14:30:58.096415  656123 logs.go:282] 0 containers: []
	W1006 14:30:58.096423  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:30:58.096428  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:30:58.096477  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:30:58.123183  656123 cri.go:89] found id: ""
	I1006 14:30:58.123199  656123 logs.go:282] 0 containers: []
	W1006 14:30:58.123218  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:30:58.123225  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:30:58.123278  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:30:58.149129  656123 cri.go:89] found id: ""
	I1006 14:30:58.149145  656123 logs.go:282] 0 containers: []
	W1006 14:30:58.149152  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:30:58.149156  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:30:58.149231  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:30:58.176154  656123 cri.go:89] found id: ""
	I1006 14:30:58.176171  656123 logs.go:282] 0 containers: []
	W1006 14:30:58.176178  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:30:58.176183  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:30:58.176260  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:30:58.202224  656123 cri.go:89] found id: ""
	I1006 14:30:58.202244  656123 logs.go:282] 0 containers: []
	W1006 14:30:58.202252  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:30:58.202257  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:30:58.202308  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:30:58.228701  656123 cri.go:89] found id: ""
	I1006 14:30:58.228722  656123 logs.go:282] 0 containers: []
	W1006 14:30:58.228731  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:30:58.228738  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:30:58.228789  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:30:58.255405  656123 cri.go:89] found id: ""
	I1006 14:30:58.255424  656123 logs.go:282] 0 containers: []
	W1006 14:30:58.255434  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:30:58.255445  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:30:58.255463  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:30:58.326378  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:30:58.326403  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:30:58.340088  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:30:58.340113  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:30:58.398424  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:30:58.390470   11153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:58.391705   11153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:58.392182   11153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:58.393789   11153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:58.394272   11153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:30:58.390470   11153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:58.391705   11153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:58.392182   11153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:58.393789   11153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:58.394272   11153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 14:30:58.398434  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:30:58.398444  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:30:58.458532  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:30:58.458557  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:31:00.988890  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:31:01.000117  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:31:01.000187  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:31:01.027975  656123 cri.go:89] found id: ""
	I1006 14:31:01.027994  656123 logs.go:282] 0 containers: []
	W1006 14:31:01.028005  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:31:01.028011  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:31:01.028073  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:31:01.057671  656123 cri.go:89] found id: ""
	I1006 14:31:01.057689  656123 logs.go:282] 0 containers: []
	W1006 14:31:01.057695  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:31:01.057703  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:31:01.057753  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:31:01.086296  656123 cri.go:89] found id: ""
	I1006 14:31:01.086312  656123 logs.go:282] 0 containers: []
	W1006 14:31:01.086319  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:31:01.086324  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:31:01.086380  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:31:01.115804  656123 cri.go:89] found id: ""
	I1006 14:31:01.115828  656123 logs.go:282] 0 containers: []
	W1006 14:31:01.115838  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:31:01.115846  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:31:01.115914  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:31:01.143626  656123 cri.go:89] found id: ""
	I1006 14:31:01.143652  656123 logs.go:282] 0 containers: []
	W1006 14:31:01.143662  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:31:01.143669  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:31:01.143730  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:31:01.173329  656123 cri.go:89] found id: ""
	I1006 14:31:01.173351  656123 logs.go:282] 0 containers: []
	W1006 14:31:01.173358  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:31:01.173363  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:31:01.173425  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:31:01.202447  656123 cri.go:89] found id: ""
	I1006 14:31:01.202464  656123 logs.go:282] 0 containers: []
	W1006 14:31:01.202472  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:31:01.202481  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:31:01.202493  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:31:01.264676  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:31:01.255680   11269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:01.256306   11269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:01.258878   11269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:01.259545   11269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:01.261098   11269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:31:01.255680   11269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:01.256306   11269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:01.258878   11269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:01.259545   11269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:01.261098   11269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 14:31:01.264688  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:31:01.264701  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:31:01.325726  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:31:01.325755  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:31:01.357935  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:31:01.357956  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:31:01.426320  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:31:01.426346  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:31:03.942695  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:31:03.954165  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:31:03.954257  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:31:03.982933  656123 cri.go:89] found id: ""
	I1006 14:31:03.982952  656123 logs.go:282] 0 containers: []
	W1006 14:31:03.982960  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:31:03.982966  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:31:03.983023  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:31:04.010750  656123 cri.go:89] found id: ""
	I1006 14:31:04.010768  656123 logs.go:282] 0 containers: []
	W1006 14:31:04.010775  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:31:04.010780  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:31:04.010845  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:31:04.038408  656123 cri.go:89] found id: ""
	I1006 14:31:04.038430  656123 logs.go:282] 0 containers: []
	W1006 14:31:04.038440  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:31:04.038446  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:31:04.038506  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:31:04.065987  656123 cri.go:89] found id: ""
	I1006 14:31:04.066004  656123 logs.go:282] 0 containers: []
	W1006 14:31:04.066011  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:31:04.066017  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:31:04.066064  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:31:04.092615  656123 cri.go:89] found id: ""
	I1006 14:31:04.092635  656123 logs.go:282] 0 containers: []
	W1006 14:31:04.092645  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:31:04.092651  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:31:04.092715  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:31:04.120296  656123 cri.go:89] found id: ""
	I1006 14:31:04.120314  656123 logs.go:282] 0 containers: []
	W1006 14:31:04.120324  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:31:04.120331  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:31:04.120392  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:31:04.148258  656123 cri.go:89] found id: ""
	I1006 14:31:04.148275  656123 logs.go:282] 0 containers: []
	W1006 14:31:04.148282  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:31:04.148291  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:31:04.148303  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:31:04.162693  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:31:04.162716  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:31:04.222565  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:31:04.214872   11401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:04.215499   11401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:04.216999   11401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:04.217486   11401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:04.218767   11401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:31:04.214872   11401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:04.215499   11401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:04.216999   11401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:04.217486   11401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:04.218767   11401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 14:31:04.222576  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:31:04.222588  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:31:04.284619  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:31:04.284645  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:31:04.315049  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:31:04.315067  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
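The timestamps show the loop retrying roughly every three seconds (pgrep at 14:31:00.988, 14:31:03.942, 14:31:06.880). Waiting for the apiserver the same way from outside the node looks like this; a sketch using the same stand-in profile name as above, and relying on `minikube ssh` propagating the remote command's exit status:

    # Poll for the apiserver process about every 3 seconds until it appears;
    # pgrep's non-zero exit keeps the loop going.
    while ! minikube -p functional-000000 ssh -- sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null 2>&1; do
      sleep 3
    done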
	I1006 14:31:06.880125  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:31:06.891035  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:31:06.891100  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:31:06.919022  656123 cri.go:89] found id: ""
	I1006 14:31:06.919039  656123 logs.go:282] 0 containers: []
	W1006 14:31:06.919054  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:31:06.919059  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:31:06.919109  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:31:06.945007  656123 cri.go:89] found id: ""
	I1006 14:31:06.945023  656123 logs.go:282] 0 containers: []
	W1006 14:31:06.945030  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:31:06.945035  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:31:06.945082  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:31:06.971114  656123 cri.go:89] found id: ""
	I1006 14:31:06.971140  656123 logs.go:282] 0 containers: []
	W1006 14:31:06.971150  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:31:06.971156  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:31:06.971219  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:31:06.997325  656123 cri.go:89] found id: ""
	I1006 14:31:06.997341  656123 logs.go:282] 0 containers: []
	W1006 14:31:06.997349  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:31:06.997354  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:31:06.997399  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:31:07.024483  656123 cri.go:89] found id: ""
	I1006 14:31:07.024503  656123 logs.go:282] 0 containers: []
	W1006 14:31:07.024510  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:31:07.024515  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:31:07.024563  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:31:07.050897  656123 cri.go:89] found id: ""
	I1006 14:31:07.050916  656123 logs.go:282] 0 containers: []
	W1006 14:31:07.050924  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:31:07.050929  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:31:07.050988  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:31:07.076681  656123 cri.go:89] found id: ""
	I1006 14:31:07.076698  656123 logs.go:282] 0 containers: []
	W1006 14:31:07.076706  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:31:07.076716  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:31:07.076730  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:31:07.137015  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:31:07.137039  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:31:07.167691  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:31:07.167711  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:31:07.236752  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:31:07.236774  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:31:07.250497  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:31:07.250519  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:31:07.307410  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:31:07.299651   11539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:07.300252   11539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:07.301817   11539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:07.302267   11539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:07.303782   11539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
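The crictl sweep above comes back empty for every control-plane component, which is why the run keeps falling through to raw log collection. A condensed form of the per-component listing minikube performs (commands copied from the log; the loop itself is only shorthand):

	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	  sudo crictl ps -a --quiet --name="$c"   # empty output = no matching container, running or exited
	done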
	I1006 14:31:09.809076  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:31:09.819941  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:31:09.819991  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:31:09.847047  656123 cri.go:89] found id: ""
	I1006 14:31:09.847066  656123 logs.go:282] 0 containers: []
	W1006 14:31:09.847075  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:31:09.847082  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:31:09.847151  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:31:09.873840  656123 cri.go:89] found id: ""
	I1006 14:31:09.873856  656123 logs.go:282] 0 containers: []
	W1006 14:31:09.873862  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:31:09.873867  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:31:09.873923  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:31:09.900892  656123 cri.go:89] found id: ""
	I1006 14:31:09.900908  656123 logs.go:282] 0 containers: []
	W1006 14:31:09.900914  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:31:09.900920  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:31:09.900967  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:31:09.927801  656123 cri.go:89] found id: ""
	I1006 14:31:09.927822  656123 logs.go:282] 0 containers: []
	W1006 14:31:09.927835  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:31:09.927842  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:31:09.927892  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:31:09.955400  656123 cri.go:89] found id: ""
	I1006 14:31:09.955420  656123 logs.go:282] 0 containers: []
	W1006 14:31:09.955428  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:31:09.955433  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:31:09.955484  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:31:09.981624  656123 cri.go:89] found id: ""
	I1006 14:31:09.981640  656123 logs.go:282] 0 containers: []
	W1006 14:31:09.981647  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:31:09.981653  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:31:09.981700  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:31:10.009693  656123 cri.go:89] found id: ""
	I1006 14:31:10.009710  656123 logs.go:282] 0 containers: []
	W1006 14:31:10.009716  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:31:10.009724  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:31:10.009735  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:31:10.075460  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:31:10.075492  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:31:10.089300  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:31:10.089327  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:31:10.148123  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:31:10.140282   11647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:10.140860   11647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:10.142433   11647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:10.142866   11647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:10.144460   11647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1006 14:31:10.148152  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:31:10.148165  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:31:10.210442  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:31:10.210473  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
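The "container status" step is written defensively so it works whichever runtime is present: it resolves crictl if installed, and if that listing fails it retries with docker. Reading the one-liner from the log:

	sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
	# which crictl || echo crictl  -> full path to crictl, or the bare name if it is not on PATH
	# || sudo docker ps -a         -> runs only if the crictl listing exits non-zero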
	I1006 14:31:12.742692  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:31:12.754226  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:31:12.754289  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:31:12.783228  656123 cri.go:89] found id: ""
	I1006 14:31:12.783249  656123 logs.go:282] 0 containers: []
	W1006 14:31:12.783256  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:31:12.783263  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:31:12.783324  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:31:12.811693  656123 cri.go:89] found id: ""
	I1006 14:31:12.811715  656123 logs.go:282] 0 containers: []
	W1006 14:31:12.811725  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:31:12.811732  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:31:12.811782  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:31:12.840310  656123 cri.go:89] found id: ""
	I1006 14:31:12.840332  656123 logs.go:282] 0 containers: []
	W1006 14:31:12.840342  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:31:12.840348  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:31:12.840402  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:31:12.869101  656123 cri.go:89] found id: ""
	I1006 14:31:12.869123  656123 logs.go:282] 0 containers: []
	W1006 14:31:12.869131  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:31:12.869137  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:31:12.869189  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:31:12.897605  656123 cri.go:89] found id: ""
	I1006 14:31:12.897623  656123 logs.go:282] 0 containers: []
	W1006 14:31:12.897630  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:31:12.897635  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:31:12.897693  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:31:12.926227  656123 cri.go:89] found id: ""
	I1006 14:31:12.926247  656123 logs.go:282] 0 containers: []
	W1006 14:31:12.926254  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:31:12.926260  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:31:12.926308  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:31:12.955298  656123 cri.go:89] found id: ""
	I1006 14:31:12.955315  656123 logs.go:282] 0 containers: []
	W1006 14:31:12.955324  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:31:12.955334  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:31:12.955348  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:31:13.021936  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:31:13.021962  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:31:13.036093  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:31:13.036115  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:31:13.096234  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:31:13.088298   11777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:13.088908   11777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:13.090517   11777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:13.090973   11777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:13.092543   11777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1006 14:31:13.096246  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:31:13.096258  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:31:13.156934  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:31:13.156960  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:31:15.689959  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:31:15.701228  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:31:15.701301  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:31:15.727030  656123 cri.go:89] found id: ""
	I1006 14:31:15.727050  656123 logs.go:282] 0 containers: []
	W1006 14:31:15.727059  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:31:15.727067  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:31:15.727119  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:31:15.753392  656123 cri.go:89] found id: ""
	I1006 14:31:15.753409  656123 logs.go:282] 0 containers: []
	W1006 14:31:15.753417  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:31:15.753421  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:31:15.753471  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:31:15.780750  656123 cri.go:89] found id: ""
	I1006 14:31:15.780775  656123 logs.go:282] 0 containers: []
	W1006 14:31:15.780783  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:31:15.780788  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:31:15.780842  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:31:15.807372  656123 cri.go:89] found id: ""
	I1006 14:31:15.807388  656123 logs.go:282] 0 containers: []
	W1006 14:31:15.807401  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:31:15.807406  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:31:15.807461  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:31:15.834188  656123 cri.go:89] found id: ""
	I1006 14:31:15.834222  656123 logs.go:282] 0 containers: []
	W1006 14:31:15.834233  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:31:15.834240  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:31:15.834293  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:31:15.861606  656123 cri.go:89] found id: ""
	I1006 14:31:15.861624  656123 logs.go:282] 0 containers: []
	W1006 14:31:15.861631  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:31:15.861636  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:31:15.861702  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:31:15.888991  656123 cri.go:89] found id: ""
	I1006 14:31:15.889007  656123 logs.go:282] 0 containers: []
	W1006 14:31:15.889014  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:31:15.889022  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:31:15.889035  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:31:15.956002  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:31:15.956024  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:31:15.969830  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:31:15.969850  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:31:16.026629  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:31:16.019009   11895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:16.019537   11895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:16.021047   11895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:16.021513   11895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:16.023044   11895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1006 14:31:16.026643  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:31:16.026656  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:31:16.085192  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:31:16.085220  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:31:18.616289  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:31:18.627239  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:31:18.627304  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:31:18.655298  656123 cri.go:89] found id: ""
	I1006 14:31:18.655318  656123 logs.go:282] 0 containers: []
	W1006 14:31:18.655327  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:31:18.655334  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:31:18.655392  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:31:18.682590  656123 cri.go:89] found id: ""
	I1006 14:31:18.682609  656123 logs.go:282] 0 containers: []
	W1006 14:31:18.682616  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:31:18.682623  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:31:18.682684  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:31:18.709329  656123 cri.go:89] found id: ""
	I1006 14:31:18.709349  656123 logs.go:282] 0 containers: []
	W1006 14:31:18.709359  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:31:18.709366  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:31:18.709428  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:31:18.735272  656123 cri.go:89] found id: ""
	I1006 14:31:18.735292  656123 logs.go:282] 0 containers: []
	W1006 14:31:18.735302  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:31:18.735309  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:31:18.735370  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:31:18.761956  656123 cri.go:89] found id: ""
	I1006 14:31:18.761973  656123 logs.go:282] 0 containers: []
	W1006 14:31:18.761980  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:31:18.761984  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:31:18.762047  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:31:18.788186  656123 cri.go:89] found id: ""
	I1006 14:31:18.788224  656123 logs.go:282] 0 containers: []
	W1006 14:31:18.788234  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:31:18.788241  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:31:18.788293  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:31:18.814751  656123 cri.go:89] found id: ""
	I1006 14:31:18.814768  656123 logs.go:282] 0 containers: []
	W1006 14:31:18.814775  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:31:18.814783  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:31:18.814793  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:31:18.874634  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:31:18.867140   12017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:18.867734   12017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:18.869314   12017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:18.869766   12017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:18.871291   12017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1006 14:31:18.874645  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:31:18.874658  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:31:18.934741  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:31:18.934765  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:31:18.964835  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:31:18.964857  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:31:19.034348  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:31:19.034372  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
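Each gathering pass pulls the same four sources: the last 400 journal lines for crio and for kubelet, a filtered dmesg, and the kubectl node description. The dmesg invocation keeps only warning-or-worse kernel messages; the same command annotated (flags exactly as in the log):

	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	# -P: no pager, -H: human-readable timestamps, -L=never: disable color codes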
	I1006 14:31:21.549097  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:31:21.560431  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:31:21.560497  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:31:21.588270  656123 cri.go:89] found id: ""
	I1006 14:31:21.588285  656123 logs.go:282] 0 containers: []
	W1006 14:31:21.588292  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:31:21.588297  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:31:21.588352  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:31:21.615501  656123 cri.go:89] found id: ""
	I1006 14:31:21.615519  656123 logs.go:282] 0 containers: []
	W1006 14:31:21.615527  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:31:21.615532  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:31:21.615590  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:31:21.643122  656123 cri.go:89] found id: ""
	I1006 14:31:21.643143  656123 logs.go:282] 0 containers: []
	W1006 14:31:21.643150  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:31:21.643154  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:31:21.643222  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:31:21.670611  656123 cri.go:89] found id: ""
	I1006 14:31:21.670628  656123 logs.go:282] 0 containers: []
	W1006 14:31:21.670635  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:31:21.670642  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:31:21.670705  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:31:21.698443  656123 cri.go:89] found id: ""
	I1006 14:31:21.698460  656123 logs.go:282] 0 containers: []
	W1006 14:31:21.698467  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:31:21.698472  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:31:21.698521  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:31:21.726957  656123 cri.go:89] found id: ""
	I1006 14:31:21.726973  656123 logs.go:282] 0 containers: []
	W1006 14:31:21.726981  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:31:21.726986  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:31:21.727032  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:31:21.754606  656123 cri.go:89] found id: ""
	I1006 14:31:21.754628  656123 logs.go:282] 0 containers: []
	W1006 14:31:21.754638  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:31:21.754648  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:31:21.754661  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:31:21.814709  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:31:21.814731  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:31:21.846526  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:31:21.846543  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:31:21.915125  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:31:21.915156  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:31:21.929444  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:31:21.929482  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:31:21.988239  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:31:21.980740   12168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:21.981329   12168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:21.982927   12168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:21.983357   12168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:21.984775   12168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1006 14:31:24.489339  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:31:24.500246  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:31:24.500303  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:31:24.527224  656123 cri.go:89] found id: ""
	I1006 14:31:24.527243  656123 logs.go:282] 0 containers: []
	W1006 14:31:24.527253  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:31:24.527258  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:31:24.527309  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:31:24.552540  656123 cri.go:89] found id: ""
	I1006 14:31:24.552559  656123 logs.go:282] 0 containers: []
	W1006 14:31:24.552567  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:31:24.552573  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:31:24.552636  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:31:24.581110  656123 cri.go:89] found id: ""
	I1006 14:31:24.581125  656123 logs.go:282] 0 containers: []
	W1006 14:31:24.581131  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:31:24.581138  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:31:24.581201  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:31:24.607563  656123 cri.go:89] found id: ""
	I1006 14:31:24.607580  656123 logs.go:282] 0 containers: []
	W1006 14:31:24.607588  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:31:24.607592  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:31:24.607649  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:31:24.633221  656123 cri.go:89] found id: ""
	I1006 14:31:24.633241  656123 logs.go:282] 0 containers: []
	W1006 14:31:24.633249  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:31:24.633255  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:31:24.633303  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:31:24.658521  656123 cri.go:89] found id: ""
	I1006 14:31:24.658540  656123 logs.go:282] 0 containers: []
	W1006 14:31:24.658547  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:31:24.658552  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:31:24.658611  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:31:24.684336  656123 cri.go:89] found id: ""
	I1006 14:31:24.684351  656123 logs.go:282] 0 containers: []
	W1006 14:31:24.684358  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:31:24.684367  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:31:24.684381  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:31:24.743258  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:31:24.735488   12275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:24.735921   12275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:24.737653   12275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:24.738173   12275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:24.739491   12275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1006 14:31:24.743270  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:31:24.743283  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:31:24.802373  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:31:24.802398  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:31:24.832699  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:31:24.832716  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:31:24.898746  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:31:24.898768  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:31:27.413617  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:31:27.424393  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:31:27.424454  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:31:27.452153  656123 cri.go:89] found id: ""
	I1006 14:31:27.452173  656123 logs.go:282] 0 containers: []
	W1006 14:31:27.452181  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:31:27.452186  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:31:27.452268  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:31:27.477797  656123 cri.go:89] found id: ""
	I1006 14:31:27.477815  656123 logs.go:282] 0 containers: []
	W1006 14:31:27.477822  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:31:27.477827  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:31:27.477881  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:31:27.502952  656123 cri.go:89] found id: ""
	I1006 14:31:27.502971  656123 logs.go:282] 0 containers: []
	W1006 14:31:27.502978  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:31:27.502983  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:31:27.503039  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:31:27.529416  656123 cri.go:89] found id: ""
	I1006 14:31:27.529433  656123 logs.go:282] 0 containers: []
	W1006 14:31:27.529440  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:31:27.529444  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:31:27.529504  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:31:27.554632  656123 cri.go:89] found id: ""
	I1006 14:31:27.554651  656123 logs.go:282] 0 containers: []
	W1006 14:31:27.554659  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:31:27.554664  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:31:27.554713  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:31:27.580924  656123 cri.go:89] found id: ""
	I1006 14:31:27.580942  656123 logs.go:282] 0 containers: []
	W1006 14:31:27.580948  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:31:27.580954  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:31:27.581007  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:31:27.605807  656123 cri.go:89] found id: ""
	I1006 14:31:27.605826  656123 logs.go:282] 0 containers: []
	W1006 14:31:27.605836  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:31:27.605846  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:31:27.605860  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:31:27.618904  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:31:27.618922  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:31:27.677305  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:31:27.669937   12394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:27.670557   12394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:27.672091   12394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:27.672543   12394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:27.673638   12394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1006 14:31:27.677315  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:31:27.677326  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:31:27.739103  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:31:27.739125  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:31:27.767028  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:31:27.767049  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:31:30.336333  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:31:30.348665  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:31:30.348724  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:31:30.377945  656123 cri.go:89] found id: ""
	I1006 14:31:30.377963  656123 logs.go:282] 0 containers: []
	W1006 14:31:30.377973  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:31:30.377979  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:31:30.378035  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:31:30.406369  656123 cri.go:89] found id: ""
	I1006 14:31:30.406391  656123 logs.go:282] 0 containers: []
	W1006 14:31:30.406400  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:31:30.406407  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:31:30.406484  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:31:30.435610  656123 cri.go:89] found id: ""
	I1006 14:31:30.435634  656123 logs.go:282] 0 containers: []
	W1006 14:31:30.435644  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:31:30.435650  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:31:30.435715  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:31:30.464182  656123 cri.go:89] found id: ""
	I1006 14:31:30.464201  656123 logs.go:282] 0 containers: []
	W1006 14:31:30.464222  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:31:30.464230  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:31:30.464285  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:31:30.493191  656123 cri.go:89] found id: ""
	I1006 14:31:30.493237  656123 logs.go:282] 0 containers: []
	W1006 14:31:30.493254  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:31:30.493260  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:31:30.493313  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:31:30.522664  656123 cri.go:89] found id: ""
	I1006 14:31:30.522684  656123 logs.go:282] 0 containers: []
	W1006 14:31:30.522695  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:31:30.522702  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:31:30.522762  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:31:30.553858  656123 cri.go:89] found id: ""
	I1006 14:31:30.553874  656123 logs.go:282] 0 containers: []
	W1006 14:31:30.553880  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:31:30.553891  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:31:30.553905  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:31:30.625537  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:31:30.625563  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:31:30.641100  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:31:30.641127  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:31:30.705527  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:31:30.696933   12514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:30.697691   12514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:30.699345   12514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:30.699934   12514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:30.701560   12514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
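	Every describe-nodes attempt in this log dies the same way: dial tcp [::1]:8441: connection refused, i.e. nothing is serving the apiserver port at all. Two quick checks from inside the node would confirm that directly (a sketch; assumes ss and curl are available in the node image):
	
	sudo ss -tlnp | grep ':8441' || echo 'nothing listening on :8441'
	curl -sk --max-time 5 https://localhost:8441/livez || echo 'apiserver unreachable'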
	I1006 14:31:30.705543  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:31:30.705560  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:31:30.768236  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:31:30.768263  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
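	With no component containers to inspect, the fallback above collects node-level evidence instead. The manual equivalent, using the unit names and paths exactly as logged (a sketch):
	
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u crio -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
	sudo crictl ps -a || sudo docker ps -a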
	I1006 14:31:33.302531  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:31:33.314251  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:31:33.314308  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:31:33.343374  656123 cri.go:89] found id: ""
	I1006 14:31:33.343394  656123 logs.go:282] 0 containers: []
	W1006 14:31:33.343404  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:31:33.343411  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:31:33.343491  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:31:33.369870  656123 cri.go:89] found id: ""
	I1006 14:31:33.369885  656123 logs.go:282] 0 containers: []
	W1006 14:31:33.369891  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:31:33.369895  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:31:33.369944  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:31:33.394611  656123 cri.go:89] found id: ""
	I1006 14:31:33.394631  656123 logs.go:282] 0 containers: []
	W1006 14:31:33.394640  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:31:33.394647  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:31:33.394696  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:31:33.420323  656123 cri.go:89] found id: ""
	I1006 14:31:33.420338  656123 logs.go:282] 0 containers: []
	W1006 14:31:33.420345  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:31:33.420350  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:31:33.420399  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:31:33.446454  656123 cri.go:89] found id: ""
	I1006 14:31:33.446483  656123 logs.go:282] 0 containers: []
	W1006 14:31:33.446493  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:31:33.446501  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:31:33.446557  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:31:33.471998  656123 cri.go:89] found id: ""
	I1006 14:31:33.472013  656123 logs.go:282] 0 containers: []
	W1006 14:31:33.472019  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:31:33.472025  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:31:33.472073  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:31:33.498038  656123 cri.go:89] found id: ""
	I1006 14:31:33.498052  656123 logs.go:282] 0 containers: []
	W1006 14:31:33.498058  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:31:33.498067  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:31:33.498077  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:31:33.554956  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:31:33.547323   12635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:33.547831   12635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:33.549458   12635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:33.549938   12635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:33.551501   12635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1006 14:31:33.554967  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:31:33.554978  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:31:33.617723  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:31:33.617747  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:31:33.647466  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:31:33.647482  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:31:33.718107  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:31:33.718128  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:31:36.233955  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:31:36.245297  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:31:36.245362  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:31:36.272483  656123 cri.go:89] found id: ""
	I1006 14:31:36.272502  656123 logs.go:282] 0 containers: []
	W1006 14:31:36.272509  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:31:36.272515  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:31:36.272574  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:31:36.299177  656123 cri.go:89] found id: ""
	I1006 14:31:36.299192  656123 logs.go:282] 0 containers: []
	W1006 14:31:36.299199  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:31:36.299229  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:31:36.299284  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:31:36.325899  656123 cri.go:89] found id: ""
	I1006 14:31:36.325920  656123 logs.go:282] 0 containers: []
	W1006 14:31:36.325938  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:31:36.325946  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:31:36.326000  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:31:36.353043  656123 cri.go:89] found id: ""
	I1006 14:31:36.353059  656123 logs.go:282] 0 containers: []
	W1006 14:31:36.353065  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:31:36.353070  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:31:36.353117  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:31:36.379229  656123 cri.go:89] found id: ""
	I1006 14:31:36.379249  656123 logs.go:282] 0 containers: []
	W1006 14:31:36.379259  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:31:36.379263  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:31:36.379320  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:31:36.407572  656123 cri.go:89] found id: ""
	I1006 14:31:36.407589  656123 logs.go:282] 0 containers: []
	W1006 14:31:36.407596  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:31:36.407601  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:31:36.407651  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:31:36.435005  656123 cri.go:89] found id: ""
	I1006 14:31:36.435022  656123 logs.go:282] 0 containers: []
	W1006 14:31:36.435028  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:31:36.435036  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:31:36.435047  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:31:36.512293  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:31:36.512319  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:31:36.526942  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:31:36.526966  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:31:36.587325  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:31:36.579436   12771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:36.579991   12771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:36.581727   12771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:36.582244   12771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:36.583796   12771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1006 14:31:36.587336  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:31:36.587349  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:31:36.648638  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:31:36.648672  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:31:39.181798  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:31:39.193122  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:31:39.193188  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:31:39.221286  656123 cri.go:89] found id: ""
	I1006 14:31:39.221304  656123 logs.go:282] 0 containers: []
	W1006 14:31:39.221312  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:31:39.221317  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:31:39.221376  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:31:39.248422  656123 cri.go:89] found id: ""
	I1006 14:31:39.248437  656123 logs.go:282] 0 containers: []
	W1006 14:31:39.248445  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:31:39.248450  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:31:39.248497  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:31:39.277291  656123 cri.go:89] found id: ""
	I1006 14:31:39.277308  656123 logs.go:282] 0 containers: []
	W1006 14:31:39.277316  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:31:39.277322  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:31:39.277390  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:31:39.303982  656123 cri.go:89] found id: ""
	I1006 14:31:39.303999  656123 logs.go:282] 0 containers: []
	W1006 14:31:39.304005  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:31:39.304011  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:31:39.304062  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:31:39.330654  656123 cri.go:89] found id: ""
	I1006 14:31:39.330674  656123 logs.go:282] 0 containers: []
	W1006 14:31:39.330681  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:31:39.330686  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:31:39.330732  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:31:39.357141  656123 cri.go:89] found id: ""
	I1006 14:31:39.357156  656123 logs.go:282] 0 containers: []
	W1006 14:31:39.357163  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:31:39.357168  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:31:39.357241  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:31:39.383968  656123 cri.go:89] found id: ""
	I1006 14:31:39.383986  656123 logs.go:282] 0 containers: []
	W1006 14:31:39.383993  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:31:39.384002  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:31:39.384014  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:31:39.451579  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:31:39.451604  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:31:39.465454  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:31:39.465475  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:31:39.523259  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:31:39.515550   12896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:39.516185   12896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:39.517720   12896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:39.518181   12896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:39.519823   12896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1006 14:31:39.523273  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:31:39.523285  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:31:39.585241  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:31:39.585265  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:31:42.115015  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:31:42.126583  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:31:42.126634  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:31:42.153385  656123 cri.go:89] found id: ""
	I1006 14:31:42.153406  656123 logs.go:282] 0 containers: []
	W1006 14:31:42.153416  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:31:42.153422  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:31:42.153479  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:31:42.181021  656123 cri.go:89] found id: ""
	I1006 14:31:42.181039  656123 logs.go:282] 0 containers: []
	W1006 14:31:42.181049  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:31:42.181055  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:31:42.181116  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:31:42.208104  656123 cri.go:89] found id: ""
	I1006 14:31:42.208123  656123 logs.go:282] 0 containers: []
	W1006 14:31:42.208133  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:31:42.208139  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:31:42.208190  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:31:42.235099  656123 cri.go:89] found id: ""
	I1006 14:31:42.235115  656123 logs.go:282] 0 containers: []
	W1006 14:31:42.235123  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:31:42.235128  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:31:42.235176  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:31:42.262052  656123 cri.go:89] found id: ""
	I1006 14:31:42.262072  656123 logs.go:282] 0 containers: []
	W1006 14:31:42.262079  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:31:42.262084  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:31:42.262142  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:31:42.288093  656123 cri.go:89] found id: ""
	I1006 14:31:42.288111  656123 logs.go:282] 0 containers: []
	W1006 14:31:42.288119  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:31:42.288124  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:31:42.288179  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:31:42.314049  656123 cri.go:89] found id: ""
	I1006 14:31:42.314068  656123 logs.go:282] 0 containers: []
	W1006 14:31:42.314076  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:31:42.314087  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:31:42.314100  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:31:42.379866  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:31:42.379892  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:31:42.393937  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:31:42.393965  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:31:42.452376  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:31:42.444669   13013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:42.445228   13013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:42.446633   13013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:42.447200   13013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:42.448583   13013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1006 14:31:42.452388  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:31:42.452400  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:31:42.513323  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:31:42.513346  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:31:45.045836  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:31:45.056587  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:31:45.056634  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:31:45.082895  656123 cri.go:89] found id: ""
	I1006 14:31:45.082913  656123 logs.go:282] 0 containers: []
	W1006 14:31:45.082922  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:31:45.082930  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:31:45.082981  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:31:45.109560  656123 cri.go:89] found id: ""
	I1006 14:31:45.109579  656123 logs.go:282] 0 containers: []
	W1006 14:31:45.109589  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:31:45.109595  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:31:45.109651  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:31:45.136033  656123 cri.go:89] found id: ""
	I1006 14:31:45.136055  656123 logs.go:282] 0 containers: []
	W1006 14:31:45.136065  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:31:45.136072  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:31:45.136145  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:31:45.162396  656123 cri.go:89] found id: ""
	I1006 14:31:45.162416  656123 logs.go:282] 0 containers: []
	W1006 14:31:45.162423  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:31:45.162427  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:31:45.162493  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:31:45.188063  656123 cri.go:89] found id: ""
	I1006 14:31:45.188077  656123 logs.go:282] 0 containers: []
	W1006 14:31:45.188084  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:31:45.188090  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:31:45.188135  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:31:45.214119  656123 cri.go:89] found id: ""
	I1006 14:31:45.214140  656123 logs.go:282] 0 containers: []
	W1006 14:31:45.214150  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:31:45.214157  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:31:45.214234  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:31:45.242147  656123 cri.go:89] found id: ""
	I1006 14:31:45.242166  656123 logs.go:282] 0 containers: []
	W1006 14:31:45.242176  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:31:45.242187  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:31:45.242201  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:31:45.311929  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:31:45.311952  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:31:45.324994  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:31:45.325015  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:31:45.381458  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:31:45.373267   13133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:45.374021   13133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:45.374992   13133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:45.376701   13133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:45.377102   13133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1006 14:31:45.381470  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:31:45.381483  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:31:45.445634  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:31:45.445652  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:31:47.975088  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:31:47.986084  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:31:47.986144  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:31:48.013186  656123 cri.go:89] found id: ""
	I1006 14:31:48.013218  656123 logs.go:282] 0 containers: []
	W1006 14:31:48.013229  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:31:48.013235  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:31:48.013289  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:31:48.039286  656123 cri.go:89] found id: ""
	I1006 14:31:48.039301  656123 logs.go:282] 0 containers: []
	W1006 14:31:48.039308  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:31:48.039313  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:31:48.039361  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:31:48.065798  656123 cri.go:89] found id: ""
	I1006 14:31:48.065813  656123 logs.go:282] 0 containers: []
	W1006 14:31:48.065821  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:31:48.065826  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:31:48.065873  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:31:48.091102  656123 cri.go:89] found id: ""
	I1006 14:31:48.091119  656123 logs.go:282] 0 containers: []
	W1006 14:31:48.091128  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:31:48.091133  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:31:48.091188  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:31:48.117766  656123 cri.go:89] found id: ""
	I1006 14:31:48.117783  656123 logs.go:282] 0 containers: []
	W1006 14:31:48.117790  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:31:48.117795  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:31:48.117844  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:31:48.144583  656123 cri.go:89] found id: ""
	I1006 14:31:48.144598  656123 logs.go:282] 0 containers: []
	W1006 14:31:48.144604  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:31:48.144609  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:31:48.144655  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:31:48.171397  656123 cri.go:89] found id: ""
	I1006 14:31:48.171413  656123 logs.go:282] 0 containers: []
	W1006 14:31:48.171421  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:31:48.171429  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:31:48.171440  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:31:48.232721  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:31:48.232743  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:31:48.262521  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:31:48.262540  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:31:48.332831  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:31:48.332851  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:31:48.346228  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:31:48.346247  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:31:48.402332  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:31:48.395067   13273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:48.395636   13273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:48.397181   13273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:48.397582   13273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:48.399142   13273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1006 14:31:50.903091  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:31:50.914581  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:31:50.914643  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:31:50.940118  656123 cri.go:89] found id: ""
	I1006 14:31:50.940134  656123 logs.go:282] 0 containers: []
	W1006 14:31:50.940144  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:31:50.940152  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:31:50.940244  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:31:50.967927  656123 cri.go:89] found id: ""
	I1006 14:31:50.967942  656123 logs.go:282] 0 containers: []
	W1006 14:31:50.967950  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:31:50.967955  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:31:50.968012  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:31:50.994911  656123 cri.go:89] found id: ""
	I1006 14:31:50.994926  656123 logs.go:282] 0 containers: []
	W1006 14:31:50.994933  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:31:50.994938  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:31:50.994983  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:31:51.021349  656123 cri.go:89] found id: ""
	I1006 14:31:51.021367  656123 logs.go:282] 0 containers: []
	W1006 14:31:51.021376  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:31:51.021381  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:31:51.021450  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:31:51.047856  656123 cri.go:89] found id: ""
	I1006 14:31:51.047875  656123 logs.go:282] 0 containers: []
	W1006 14:31:51.047885  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:31:51.047892  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:31:51.047953  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:31:51.074984  656123 cri.go:89] found id: ""
	I1006 14:31:51.075002  656123 logs.go:282] 0 containers: []
	W1006 14:31:51.075009  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:31:51.075014  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:31:51.075076  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:31:51.102644  656123 cri.go:89] found id: ""
	I1006 14:31:51.102660  656123 logs.go:282] 0 containers: []
	W1006 14:31:51.102668  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:31:51.102677  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:31:51.102692  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:31:51.164842  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:31:51.164869  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:31:51.194272  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:31:51.194293  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:31:51.264785  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:31:51.264809  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:31:51.279283  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:31:51.279311  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:31:51.337565  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:31:51.329770   13401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:51.330346   13401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:51.331936   13401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:51.332399   13401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:51.334039   13401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1006 14:31:53.839279  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:31:53.850387  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:31:53.850446  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:31:53.878099  656123 cri.go:89] found id: ""
	I1006 14:31:53.878125  656123 logs.go:282] 0 containers: []
	W1006 14:31:53.878135  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:31:53.878142  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:31:53.878199  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:31:53.905974  656123 cri.go:89] found id: ""
	I1006 14:31:53.905994  656123 logs.go:282] 0 containers: []
	W1006 14:31:53.906004  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:31:53.906011  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:31:53.906073  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:31:53.934338  656123 cri.go:89] found id: ""
	I1006 14:31:53.934355  656123 logs.go:282] 0 containers: []
	W1006 14:31:53.934362  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:31:53.934367  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:31:53.934417  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:31:53.961409  656123 cri.go:89] found id: ""
	I1006 14:31:53.961428  656123 logs.go:282] 0 containers: []
	W1006 14:31:53.961436  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:31:53.961442  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:31:53.961492  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:31:53.988451  656123 cri.go:89] found id: ""
	I1006 14:31:53.988468  656123 logs.go:282] 0 containers: []
	W1006 14:31:53.988475  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:31:53.988481  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:31:53.988541  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:31:54.015683  656123 cri.go:89] found id: ""
	I1006 14:31:54.015703  656123 logs.go:282] 0 containers: []
	W1006 14:31:54.015712  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:31:54.015718  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:31:54.015769  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:31:54.043179  656123 cri.go:89] found id: ""
	I1006 14:31:54.043196  656123 logs.go:282] 0 containers: []
	W1006 14:31:54.043215  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:31:54.043226  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:31:54.043242  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:31:54.107582  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:31:54.107606  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:31:54.138057  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:31:54.138078  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:31:54.204366  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:31:54.204394  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:31:54.218513  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:31:54.218535  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:31:54.279164  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:31:54.271489   13525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:54.272091   13525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:54.273620   13525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:54.274071   13525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:54.275622   13525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1006 14:31:56.780360  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:31:56.791915  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:31:56.791969  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:31:56.817452  656123 cri.go:89] found id: ""
	I1006 14:31:56.817470  656123 logs.go:282] 0 containers: []
	W1006 14:31:56.817477  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:31:56.817483  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:31:56.817529  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:31:56.842632  656123 cri.go:89] found id: ""
	I1006 14:31:56.842646  656123 logs.go:282] 0 containers: []
	W1006 14:31:56.842653  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:31:56.842657  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:31:56.842700  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:31:56.870346  656123 cri.go:89] found id: ""
	I1006 14:31:56.870361  656123 logs.go:282] 0 containers: []
	W1006 14:31:56.870368  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:31:56.870373  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:31:56.870421  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:31:56.898085  656123 cri.go:89] found id: ""
	I1006 14:31:56.898102  656123 logs.go:282] 0 containers: []
	W1006 14:31:56.898107  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:31:56.898112  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:31:56.898172  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:31:56.925826  656123 cri.go:89] found id: ""
	I1006 14:31:56.925842  656123 logs.go:282] 0 containers: []
	W1006 14:31:56.925849  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:31:56.925854  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:31:56.925917  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:31:56.952736  656123 cri.go:89] found id: ""
	I1006 14:31:56.952753  656123 logs.go:282] 0 containers: []
	W1006 14:31:56.952759  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:31:56.952764  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:31:56.952817  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:31:56.981505  656123 cri.go:89] found id: ""
	I1006 14:31:56.981524  656123 logs.go:282] 0 containers: []
	W1006 14:31:56.981534  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:31:56.981544  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:31:56.981558  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:31:57.038974  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:31:57.031730   13621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:57.032302   13621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:57.033897   13621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:57.034349   13621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:57.035558   13621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:31:57.031730   13621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:57.032302   13621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:57.033897   13621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:57.034349   13621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:57.035558   13621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 14:31:57.038998  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:31:57.039009  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:31:57.104175  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:31:57.104199  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:31:57.133096  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:31:57.133118  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:31:57.198894  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:31:57.198924  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
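
[Note] Each of these ~3-second cycles is minikube's apiserver wait loop: it pgreps for a kube-apiserver process, asks crictl for each control-plane container by name, finds none, then gathers kubelet/dmesg/describe-nodes/CRI-O/container-status logs before retrying. The same probe can be reproduced by hand from a shell on the node (e.g. via `minikube ssh`); a minimal sketch using the exact commands the loop runs above:

    # Manual version of the health-wait probe (run inside the node):
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'        # is an apiserver process up?
    sudo crictl ps -a --quiet --name=kube-apiserver     # does CRI-O know of a container?
    sudo journalctl -u kubelet -n 50                    # why the kubelet isn't starting it

In this run every probe comes back empty, which is why the describe-nodes call below keeps failing with connection refused on localhost:8441.
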
	I1006 14:31:59.714028  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:31:59.725916  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:31:59.725972  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:31:59.751782  656123 cri.go:89] found id: ""
	I1006 14:31:59.751801  656123 logs.go:282] 0 containers: []
	W1006 14:31:59.751810  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:31:59.751816  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:31:59.751864  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:31:59.776851  656123 cri.go:89] found id: ""
	I1006 14:31:59.776867  656123 logs.go:282] 0 containers: []
	W1006 14:31:59.776874  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:31:59.776878  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:31:59.776924  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:31:59.800431  656123 cri.go:89] found id: ""
	I1006 14:31:59.800447  656123 logs.go:282] 0 containers: []
	W1006 14:31:59.800455  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:31:59.800467  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:31:59.800530  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:31:59.825387  656123 cri.go:89] found id: ""
	I1006 14:31:59.825404  656123 logs.go:282] 0 containers: []
	W1006 14:31:59.825412  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:31:59.825423  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:31:59.825468  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:31:59.849169  656123 cri.go:89] found id: ""
	I1006 14:31:59.849186  656123 logs.go:282] 0 containers: []
	W1006 14:31:59.849195  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:31:59.849232  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:31:59.849291  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:31:59.874820  656123 cri.go:89] found id: ""
	I1006 14:31:59.874835  656123 logs.go:282] 0 containers: []
	W1006 14:31:59.874841  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:31:59.874846  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:31:59.874893  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:31:59.900818  656123 cri.go:89] found id: ""
	I1006 14:31:59.900834  656123 logs.go:282] 0 containers: []
	W1006 14:31:59.900840  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:31:59.900848  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:31:59.900860  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:31:59.957989  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:31:59.950533   13743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:59.951047   13743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:59.952664   13743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:59.953012   13743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:59.954540   13743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:31:59.950533   13743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:59.951047   13743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:59.952664   13743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:59.953012   13743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:59.954540   13743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 14:31:59.958004  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:31:59.958025  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:32:00.016244  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:32:00.016287  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:32:00.047330  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:32:00.047346  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:32:00.111078  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:32:00.111104  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:32:02.626253  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:32:02.637551  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:32:02.637606  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:32:02.665023  656123 cri.go:89] found id: ""
	I1006 14:32:02.665040  656123 logs.go:282] 0 containers: []
	W1006 14:32:02.665050  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:32:02.665056  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:32:02.665118  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:32:02.692374  656123 cri.go:89] found id: ""
	I1006 14:32:02.692397  656123 logs.go:282] 0 containers: []
	W1006 14:32:02.692404  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:32:02.692409  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:32:02.692458  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:32:02.719922  656123 cri.go:89] found id: ""
	I1006 14:32:02.719942  656123 logs.go:282] 0 containers: []
	W1006 14:32:02.719953  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:32:02.719959  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:32:02.720014  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:32:02.746934  656123 cri.go:89] found id: ""
	I1006 14:32:02.746950  656123 logs.go:282] 0 containers: []
	W1006 14:32:02.746956  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:32:02.746962  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:32:02.747009  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:32:02.774417  656123 cri.go:89] found id: ""
	I1006 14:32:02.774435  656123 logs.go:282] 0 containers: []
	W1006 14:32:02.774442  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:32:02.774447  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:32:02.774496  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:32:02.801761  656123 cri.go:89] found id: ""
	I1006 14:32:02.801776  656123 logs.go:282] 0 containers: []
	W1006 14:32:02.801783  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:32:02.801788  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:32:02.801844  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:32:02.828981  656123 cri.go:89] found id: ""
	I1006 14:32:02.828998  656123 logs.go:282] 0 containers: []
	W1006 14:32:02.829005  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:32:02.829014  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:32:02.829028  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:32:02.895754  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:32:02.895778  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:32:02.909930  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:32:02.909950  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:32:02.968533  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:32:02.961042   13877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:32:02.961577   13877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:32:02.963104   13877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:32:02.963565   13877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:32:02.965085   13877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:32:02.961042   13877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:32:02.961577   13877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:32:02.963104   13877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:32:02.963565   13877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:32:02.965085   13877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 14:32:02.968546  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:32:02.968560  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:32:03.033943  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:32:03.033967  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:32:05.566153  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:32:05.577534  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:32:05.577601  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:32:05.604282  656123 cri.go:89] found id: ""
	I1006 14:32:05.604301  656123 logs.go:282] 0 containers: []
	W1006 14:32:05.604311  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:32:05.604317  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:32:05.604375  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:32:05.631089  656123 cri.go:89] found id: ""
	I1006 14:32:05.631105  656123 logs.go:282] 0 containers: []
	W1006 14:32:05.631112  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:32:05.631116  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:32:05.631172  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:32:05.658464  656123 cri.go:89] found id: ""
	I1006 14:32:05.658484  656123 logs.go:282] 0 containers: []
	W1006 14:32:05.658495  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:32:05.658501  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:32:05.658559  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:32:05.685951  656123 cri.go:89] found id: ""
	I1006 14:32:05.685971  656123 logs.go:282] 0 containers: []
	W1006 14:32:05.685980  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:32:05.685987  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:32:05.686043  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:32:05.712003  656123 cri.go:89] found id: ""
	I1006 14:32:05.712020  656123 logs.go:282] 0 containers: []
	W1006 14:32:05.712028  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:32:05.712033  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:32:05.712093  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:32:05.740632  656123 cri.go:89] found id: ""
	I1006 14:32:05.740652  656123 logs.go:282] 0 containers: []
	W1006 14:32:05.740660  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:32:05.740667  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:32:05.740728  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:32:05.766042  656123 cri.go:89] found id: ""
	I1006 14:32:05.766064  656123 logs.go:282] 0 containers: []
	W1006 14:32:05.766072  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:32:05.766080  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:32:05.766092  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:32:05.837102  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:32:05.837132  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:32:05.851014  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:32:05.851038  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:32:05.910902  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:32:05.903038   14001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:32:05.903650   14001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:32:05.905294   14001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:32:05.905834   14001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:32:05.907440   14001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:32:05.903038   14001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:32:05.903650   14001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:32:05.905294   14001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:32:05.905834   14001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:32:05.907440   14001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 14:32:05.910914  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:32:05.910927  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:32:05.975171  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:32:05.975197  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:32:08.507407  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:32:08.518312  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:32:08.518362  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:32:08.544556  656123 cri.go:89] found id: ""
	I1006 14:32:08.544575  656123 logs.go:282] 0 containers: []
	W1006 14:32:08.544585  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:32:08.544591  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:32:08.544646  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:32:08.569832  656123 cri.go:89] found id: ""
	I1006 14:32:08.569849  656123 logs.go:282] 0 containers: []
	W1006 14:32:08.569858  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:32:08.569863  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:32:08.569911  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:32:08.595352  656123 cri.go:89] found id: ""
	I1006 14:32:08.595368  656123 logs.go:282] 0 containers: []
	W1006 14:32:08.595377  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:32:08.595383  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:32:08.595447  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:32:08.621980  656123 cri.go:89] found id: ""
	I1006 14:32:08.621995  656123 logs.go:282] 0 containers: []
	W1006 14:32:08.622001  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:32:08.622006  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:32:08.622062  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:32:08.648436  656123 cri.go:89] found id: ""
	I1006 14:32:08.648451  656123 logs.go:282] 0 containers: []
	W1006 14:32:08.648458  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:32:08.648462  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:32:08.648519  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:32:08.673561  656123 cri.go:89] found id: ""
	I1006 14:32:08.673579  656123 logs.go:282] 0 containers: []
	W1006 14:32:08.673589  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:32:08.673595  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:32:08.673657  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:32:08.699829  656123 cri.go:89] found id: ""
	I1006 14:32:08.699847  656123 logs.go:282] 0 containers: []
	W1006 14:32:08.699855  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:32:08.699866  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:32:08.699884  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:32:08.712951  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:32:08.712972  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:32:08.769035  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:32:08.761477   14117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:32:08.762001   14117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:32:08.763631   14117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:32:08.764099   14117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:32:08.765640   14117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:32:08.761477   14117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:32:08.762001   14117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:32:08.763631   14117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:32:08.764099   14117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:32:08.765640   14117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 14:32:08.769047  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:32:08.769063  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:32:08.832511  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:32:08.832534  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:32:08.861346  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:32:08.861364  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:32:11.430582  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:32:11.441872  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:32:11.441923  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:32:11.467567  656123 cri.go:89] found id: ""
	I1006 14:32:11.467586  656123 logs.go:282] 0 containers: []
	W1006 14:32:11.467596  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:32:11.467603  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:32:11.467660  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:32:11.494656  656123 cri.go:89] found id: ""
	I1006 14:32:11.494683  656123 logs.go:282] 0 containers: []
	W1006 14:32:11.494690  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:32:11.494695  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:32:11.494743  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:32:11.521748  656123 cri.go:89] found id: ""
	I1006 14:32:11.521763  656123 logs.go:282] 0 containers: []
	W1006 14:32:11.521770  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:32:11.521775  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:32:11.521820  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:32:11.548602  656123 cri.go:89] found id: ""
	I1006 14:32:11.548620  656123 logs.go:282] 0 containers: []
	W1006 14:32:11.548626  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:32:11.548632  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:32:11.548691  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:32:11.576572  656123 cri.go:89] found id: ""
	I1006 14:32:11.576588  656123 logs.go:282] 0 containers: []
	W1006 14:32:11.576595  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:32:11.576600  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:32:11.576651  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:32:11.603326  656123 cri.go:89] found id: ""
	I1006 14:32:11.603346  656123 logs.go:282] 0 containers: []
	W1006 14:32:11.603355  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:32:11.603360  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:32:11.603415  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:32:11.629710  656123 cri.go:89] found id: ""
	I1006 14:32:11.629728  656123 logs.go:282] 0 containers: []
	W1006 14:32:11.629738  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:32:11.629747  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:32:11.629757  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:32:11.700650  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:32:11.700726  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:32:11.714603  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:32:11.714630  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:32:11.772602  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:32:11.764966   14244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:32:11.765455   14244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:32:11.767171   14244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:32:11.767660   14244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:32:11.769186   14244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:32:11.764966   14244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:32:11.765455   14244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:32:11.767171   14244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:32:11.767660   14244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:32:11.769186   14244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 14:32:11.772614  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:32:11.772626  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:32:11.833230  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:32:11.833254  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:32:14.365875  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:32:14.376698  656123 kubeadm.go:601] duration metric: took 4m4.218544485s to restartPrimaryControlPlane
	W1006 14:32:14.376820  656123 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1006 14:32:14.376904  656123 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1006 14:32:14.835776  656123 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1006 14:32:14.848804  656123 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1006 14:32:14.857253  656123 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1006 14:32:14.857310  656123 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1006 14:32:14.864786  656123 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1006 14:32:14.864795  656123 kubeadm.go:157] found existing configuration files:
	
	I1006 14:32:14.864835  656123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1006 14:32:14.872239  656123 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1006 14:32:14.872285  656123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1006 14:32:14.879414  656123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1006 14:32:14.886697  656123 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1006 14:32:14.886741  656123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1006 14:32:14.893638  656123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1006 14:32:14.900861  656123 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1006 14:32:14.900895  656123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1006 14:32:14.907789  656123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1006 14:32:14.914902  656123 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1006 14:32:14.914933  656123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
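
[Note] The block above (14:32:14.864 onward) is the stale-kubeconfig cleanup: for each of admin.conf, kubelet.conf, controller-manager.conf and scheduler.conf, minikube greps for the expected control-plane endpoint and removes the file if the endpoint is absent. Here all four files are already missing, so each grep exits with status 2 and the rm is a no-op. A condensed sketch of that check, using the endpoint from this run:

    # Sketch of the stale-config check performed above:
    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q 'https://control-plane.minikube.internal:8441' \
        "/etc/kubernetes/$f.conf" || sudo rm -f "/etc/kubernetes/$f.conf"
    done
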
	I1006 14:32:14.921800  656123 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1006 14:32:14.978601  656123 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1006 14:32:15.038549  656123 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1006 14:36:17.406896  656123 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1006 14:36:17.407019  656123 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1006 14:36:17.410627  656123 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1006 14:36:17.410683  656123 kubeadm.go:318] [preflight] Running pre-flight checks
	I1006 14:36:17.410779  656123 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1006 14:36:17.410840  656123 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1006 14:36:17.410869  656123 kubeadm.go:318] OS: Linux
	I1006 14:36:17.410914  656123 kubeadm.go:318] CGROUPS_CPU: enabled
	I1006 14:36:17.410949  656123 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1006 14:36:17.411007  656123 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1006 14:36:17.411060  656123 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1006 14:36:17.411098  656123 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1006 14:36:17.411140  656123 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1006 14:36:17.411189  656123 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1006 14:36:17.411245  656123 kubeadm.go:318] CGROUPS_IO: enabled
	I1006 14:36:17.411317  656123 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1006 14:36:17.411401  656123 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1006 14:36:17.411485  656123 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1006 14:36:17.411556  656123 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1006 14:36:17.413722  656123 out.go:252]   - Generating certificates and keys ...
	I1006 14:36:17.413795  656123 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1006 14:36:17.413884  656123 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1006 14:36:17.413987  656123 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1006 14:36:17.414057  656123 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1006 14:36:17.414137  656123 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1006 14:36:17.414181  656123 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1006 14:36:17.414260  656123 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1006 14:36:17.414334  656123 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1006 14:36:17.414439  656123 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1006 14:36:17.414518  656123 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1006 14:36:17.414578  656123 kubeadm.go:318] [certs] Using the existing "sa" key
	I1006 14:36:17.414662  656123 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1006 14:36:17.414728  656123 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1006 14:36:17.414803  656123 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1006 14:36:17.414845  656123 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1006 14:36:17.414916  656123 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1006 14:36:17.414967  656123 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1006 14:36:17.415028  656123 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1006 14:36:17.415104  656123 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1006 14:36:17.416892  656123 out.go:252]   - Booting up control plane ...
	I1006 14:36:17.416963  656123 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1006 14:36:17.417045  656123 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1006 14:36:17.417099  656123 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1006 14:36:17.417195  656123 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1006 14:36:17.417298  656123 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1006 14:36:17.417388  656123 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1006 14:36:17.417462  656123 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1006 14:36:17.417493  656123 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1006 14:36:17.417595  656123 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1006 14:36:17.417679  656123 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1006 14:36:17.417755  656123 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 502.528699ms
	I1006 14:36:17.417834  656123 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1006 14:36:17.417918  656123 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	I1006 14:36:17.418000  656123 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1006 14:36:17.418064  656123 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1006 14:36:17.418126  656123 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000416419s
	I1006 14:36:17.418196  656123 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000737625s
	I1006 14:36:17.418279  656123 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.00070414s
	I1006 14:36:17.418282  656123 kubeadm.go:318] 
	I1006 14:36:17.418350  656123 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1006 14:36:17.418415  656123 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1006 14:36:17.418514  656123 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1006 14:36:17.418595  656123 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1006 14:36:17.418668  656123 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1006 14:36:17.418749  656123 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1006 14:36:17.418809  656123 kubeadm.go:318] 
	W1006 14:36:17.418920  656123 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 502.528699ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000416419s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000737625s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.00070414s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI.
	Here is one example of how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	I1006 14:36:17.419037  656123 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1006 14:36:17.865331  656123 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1006 14:36:17.878364  656123 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1006 14:36:17.878407  656123 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1006 14:36:17.886488  656123 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1006 14:36:17.886495  656123 kubeadm.go:157] found existing configuration files:
	
	I1006 14:36:17.886535  656123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1006 14:36:17.894142  656123 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1006 14:36:17.894180  656123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1006 14:36:17.901791  656123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1006 14:36:17.909427  656123 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1006 14:36:17.909474  656123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1006 14:36:17.916720  656123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1006 14:36:17.924474  656123 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1006 14:36:17.924517  656123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1006 14:36:17.931765  656123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1006 14:36:17.939342  656123 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1006 14:36:17.939397  656123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1006 14:36:17.947232  656123 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1006 14:36:17.986103  656123 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1006 14:36:17.986155  656123 kubeadm.go:318] [preflight] Running pre-flight checks
	I1006 14:36:18.005746  656123 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1006 14:36:18.005847  656123 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1006 14:36:18.005884  656123 kubeadm.go:318] OS: Linux
	I1006 14:36:18.005928  656123 kubeadm.go:318] CGROUPS_CPU: enabled
	I1006 14:36:18.005966  656123 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1006 14:36:18.006009  656123 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1006 14:36:18.006047  656123 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1006 14:36:18.006115  656123 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1006 14:36:18.006229  656123 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1006 14:36:18.006274  656123 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1006 14:36:18.006314  656123 kubeadm.go:318] CGROUPS_IO: enabled
	I1006 14:36:18.063701  656123 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1006 14:36:18.063828  656123 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1006 14:36:18.063979  656123 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1006 14:36:18.070276  656123 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1006 14:36:18.073073  656123 out.go:252]   - Generating certificates and keys ...
	I1006 14:36:18.073146  656123 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1006 14:36:18.073230  656123 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1006 14:36:18.073310  656123 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1006 14:36:18.073360  656123 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1006 14:36:18.073469  656123 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1006 14:36:18.073537  656123 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1006 14:36:18.073593  656123 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1006 14:36:18.073643  656123 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1006 14:36:18.073731  656123 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1006 14:36:18.073828  656123 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1006 14:36:18.073881  656123 kubeadm.go:318] [certs] Using the existing "sa" key
	I1006 14:36:18.073950  656123 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1006 14:36:18.358369  656123 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1006 14:36:18.660416  656123 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1006 14:36:18.904822  656123 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1006 14:36:19.181972  656123 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1006 14:36:19.419333  656123 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1006 14:36:19.419883  656123 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1006 14:36:19.422018  656123 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1006 14:36:19.424552  656123 out.go:252]   - Booting up control plane ...
	I1006 14:36:19.424633  656123 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1006 14:36:19.424695  656123 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1006 14:36:19.424766  656123 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1006 14:36:19.438773  656123 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1006 14:36:19.438935  656123 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1006 14:36:19.446167  656123 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1006 14:36:19.446370  656123 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1006 14:36:19.446407  656123 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1006 14:36:19.549636  656123 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1006 14:36:19.549773  656123 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1006 14:36:21.051643  656123 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.501975645s
	I1006 14:36:21.055540  656123 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1006 14:36:21.055642  656123 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	I1006 14:36:21.055761  656123 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1006 14:36:21.055838  656123 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1006 14:40:21.055953  656123 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000134857s
	I1006 14:40:21.056046  656123 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.00022136s
	I1006 14:40:21.056101  656123 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000206831s
	I1006 14:40:21.056104  656123 kubeadm.go:318] 
	I1006 14:40:21.056173  656123 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1006 14:40:21.056304  656123 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtime's CLI.
	I1006 14:40:21.056432  656123 kubeadm.go:318] Here is one example of how you may list all running Kubernetes containers by using crictl:
	I1006 14:40:21.056532  656123 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1006 14:40:21.056641  656123 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1006 14:40:21.056764  656123 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1006 14:40:21.056770  656123 kubeadm.go:318] 
	I1006 14:40:21.060023  656123 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1006 14:40:21.060145  656123 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1006 14:40:21.060722  656123 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8441/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline]
	I1006 14:40:21.060819  656123 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1006 14:40:21.060909  656123 kubeadm.go:402] duration metric: took 12m10.94114452s to StartCluster
	I1006 14:40:21.060976  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:40:21.061036  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:40:21.089107  656123 cri.go:89] found id: ""
	I1006 14:40:21.089130  656123 logs.go:282] 0 containers: []
	W1006 14:40:21.089137  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:40:21.089143  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:40:21.089218  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:40:21.116923  656123 cri.go:89] found id: ""
	I1006 14:40:21.116942  656123 logs.go:282] 0 containers: []
	W1006 14:40:21.116948  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:40:21.116954  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:40:21.117001  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:40:21.144161  656123 cri.go:89] found id: ""
	I1006 14:40:21.144196  656123 logs.go:282] 0 containers: []
	W1006 14:40:21.144219  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:40:21.144227  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:40:21.144287  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:40:21.173031  656123 cri.go:89] found id: ""
	I1006 14:40:21.173051  656123 logs.go:282] 0 containers: []
	W1006 14:40:21.173059  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:40:21.173065  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:40:21.173117  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:40:21.200194  656123 cri.go:89] found id: ""
	I1006 14:40:21.200232  656123 logs.go:282] 0 containers: []
	W1006 14:40:21.200242  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:40:21.200249  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:40:21.200313  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:40:21.227692  656123 cri.go:89] found id: ""
	I1006 14:40:21.227708  656123 logs.go:282] 0 containers: []
	W1006 14:40:21.227715  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:40:21.227720  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:40:21.227777  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:40:21.255803  656123 cri.go:89] found id: ""
	I1006 14:40:21.255827  656123 logs.go:282] 0 containers: []
	W1006 14:40:21.255836  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:40:21.255848  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:40:21.255863  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:40:21.269683  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:40:21.269708  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:40:21.330259  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:40:21.322987   15591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:40:21.323612   15591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:40:21.324719   15591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:40:21.325098   15591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:40:21.326635   15591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:40:21.322987   15591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:40:21.323612   15591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:40:21.324719   15591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:40:21.325098   15591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:40:21.326635   15591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 14:40:21.330282  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:40:21.330295  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:40:21.395010  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:40:21.395036  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:40:21.425956  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:40:21.425975  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1006 14:40:21.494244  656123 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.501975645s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000134857s
	[control-plane-check] kube-scheduler is not healthy after 4m0.00022136s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000206831s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI.
	Here is one example of how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8441/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline]
	To see the stack trace of this error execute with --v=5 or higher
	W1006 14:40:21.494316  656123 out.go:285] * 
	W1006 14:40:21.494402  656123 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.501975645s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000134857s
	[control-plane-check] kube-scheduler is not healthy after 4m0.00022136s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000206831s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI.
	Here is one example of how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8441/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1006 14:40:21.494415  656123 out.go:285] * 
	W1006 14:40:21.496145  656123 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1006 14:40:21.499891  656123 out.go:203] 
	W1006 14:40:21.500973  656123 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.501975645s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000134857s
	[control-plane-check] kube-scheduler is not healthy after 4m0.00022136s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000206831s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI.
	Here is one example of how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8441/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1006 14:40:21.500999  656123 out.go:285] * 
	I1006 14:40:21.502231  656123 out.go:203] 
	
	
	==> CRI-O <==
	Oct 06 14:40:32 functional-135520 crio[5849]: time="2025-10-06T14:40:32.89139927Z" level=info msg="Checking image status: kicbase/echo-server:functional-135520" id=2fdc8ae0-74cb-4379-b2fd-000512a96d7e name=/runtime.v1.ImageService/ImageStatus
	Oct 06 14:40:32 functional-135520 crio[5849]: time="2025-10-06T14:40:32.918994026Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-135520" id=ab3fb7f0-e5b4-49fd-8d10-f1f8acba9f64 name=/runtime.v1.ImageService/ImageStatus
	Oct 06 14:40:32 functional-135520 crio[5849]: time="2025-10-06T14:40:32.919111647Z" level=info msg="Image docker.io/kicbase/echo-server:functional-135520 not found" id=ab3fb7f0-e5b4-49fd-8d10-f1f8acba9f64 name=/runtime.v1.ImageService/ImageStatus
	Oct 06 14:40:32 functional-135520 crio[5849]: time="2025-10-06T14:40:32.919146294Z" level=info msg="Neither image nor artifact docker.io/kicbase/echo-server:functional-135520 found" id=ab3fb7f0-e5b4-49fd-8d10-f1f8acba9f64 name=/runtime.v1.ImageService/ImageStatus
	Oct 06 14:40:32 functional-135520 crio[5849]: time="2025-10-06T14:40:32.946070229Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-135520" id=f1b76fb8-2601-4329-a83b-036d044b53a4 name=/runtime.v1.ImageService/ImageStatus
	Oct 06 14:40:32 functional-135520 crio[5849]: time="2025-10-06T14:40:32.94625581Z" level=info msg="Image localhost/kicbase/echo-server:functional-135520 not found" id=f1b76fb8-2601-4329-a83b-036d044b53a4 name=/runtime.v1.ImageService/ImageStatus
	Oct 06 14:40:32 functional-135520 crio[5849]: time="2025-10-06T14:40:32.946327676Z" level=info msg="Neither image nor artifact localhost/kicbase/echo-server:functional-135520 found" id=f1b76fb8-2601-4329-a83b-036d044b53a4 name=/runtime.v1.ImageService/ImageStatus
	Oct 06 14:40:33 functional-135520 crio[5849]: time="2025-10-06T14:40:33.736074966Z" level=info msg="Checking image status: kicbase/echo-server:functional-135520" id=b0d58989-d35a-49df-b66f-73123c87264c name=/runtime.v1.ImageService/ImageStatus
	Oct 06 14:40:33 functional-135520 crio[5849]: time="2025-10-06T14:40:33.766254225Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-135520" id=33b10420-71fc-4bcf-b97c-005c11159859 name=/runtime.v1.ImageService/ImageStatus
	Oct 06 14:40:33 functional-135520 crio[5849]: time="2025-10-06T14:40:33.76639425Z" level=info msg="Image docker.io/kicbase/echo-server:functional-135520 not found" id=33b10420-71fc-4bcf-b97c-005c11159859 name=/runtime.v1.ImageService/ImageStatus
	Oct 06 14:40:33 functional-135520 crio[5849]: time="2025-10-06T14:40:33.76642926Z" level=info msg="Neither image nor artifact docker.io/kicbase/echo-server:functional-135520 found" id=33b10420-71fc-4bcf-b97c-005c11159859 name=/runtime.v1.ImageService/ImageStatus
	Oct 06 14:40:33 functional-135520 crio[5849]: time="2025-10-06T14:40:33.798335064Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-135520" id=c07c47b5-f123-4df6-aac0-718c9481559f name=/runtime.v1.ImageService/ImageStatus
	Oct 06 14:40:33 functional-135520 crio[5849]: time="2025-10-06T14:40:33.798458706Z" level=info msg="Image localhost/kicbase/echo-server:functional-135520 not found" id=c07c47b5-f123-4df6-aac0-718c9481559f name=/runtime.v1.ImageService/ImageStatus
	Oct 06 14:40:33 functional-135520 crio[5849]: time="2025-10-06T14:40:33.798490196Z" level=info msg="Neither image nor artifact localhost/kicbase/echo-server:functional-135520 found" id=c07c47b5-f123-4df6-aac0-718c9481559f name=/runtime.v1.ImageService/ImageStatus
	Oct 06 14:40:33 functional-135520 crio[5849]: time="2025-10-06T14:40:33.980963669Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=17b6706e-b500-4524-871f-23df38e70571 name=/runtime.v1.ImageService/ImageStatus
	Oct 06 14:40:33 functional-135520 crio[5849]: time="2025-10-06T14:40:33.981925826Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=94f4b8be-c003-4976-9cb9-8a805158b29d name=/runtime.v1.ImageService/ImageStatus
	Oct 06 14:40:33 functional-135520 crio[5849]: time="2025-10-06T14:40:33.982820585Z" level=info msg="Creating container: kube-system/kube-scheduler-functional-135520/kube-scheduler" id=af53cacb-5aef-4f09-b7c7-e182743a4512 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:40:33 functional-135520 crio[5849]: time="2025-10-06T14:40:33.983106395Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 14:40:33 functional-135520 crio[5849]: time="2025-10-06T14:40:33.987700403Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 14:40:33 functional-135520 crio[5849]: time="2025-10-06T14:40:33.988175946Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 14:40:34 functional-135520 crio[5849]: time="2025-10-06T14:40:34.003670737Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=af53cacb-5aef-4f09-b7c7-e182743a4512 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:40:34 functional-135520 crio[5849]: time="2025-10-06T14:40:34.005132701Z" level=info msg="createCtr: deleting container ID aa3a2f6476915d7b5d9b1bd05a3095d22efa7de7f25df14d6830c1b4bad20c39 from idIndex" id=af53cacb-5aef-4f09-b7c7-e182743a4512 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:40:34 functional-135520 crio[5849]: time="2025-10-06T14:40:34.005171158Z" level=info msg="createCtr: removing container aa3a2f6476915d7b5d9b1bd05a3095d22efa7de7f25df14d6830c1b4bad20c39" id=af53cacb-5aef-4f09-b7c7-e182743a4512 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:40:34 functional-135520 crio[5849]: time="2025-10-06T14:40:34.005225713Z" level=info msg="createCtr: deleting container aa3a2f6476915d7b5d9b1bd05a3095d22efa7de7f25df14d6830c1b4bad20c39 from storage" id=af53cacb-5aef-4f09-b7c7-e182743a4512 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:40:34 functional-135520 crio[5849]: time="2025-10-06T14:40:34.007324024Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-functional-135520_kube-system_5115bd1eba9594a3f2b99b5d6a4b9d59_0" id=af53cacb-5aef-4f09-b7c7-e182743a4512 name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:40:39.025925   17592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:40:39.026508   17592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:40:39.028380   17592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:40:39.028934   17592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:40:39.030461   17592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	
	
	==> kernel <==
	 14:40:39 up  5:22,  0 user,  load average: 1.09, 0.28, 0.31
	Linux functional-135520 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 06 14:40:29 functional-135520 kubelet[14966]: E1006 14:40:29.023668   14966 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 06 14:40:29 functional-135520 kubelet[14966]:         container kube-apiserver start failed in pod kube-apiserver-functional-135520_kube-system(9c0f460a73b4e4a7087ce2a722c4cad4): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 06 14:40:29 functional-135520 kubelet[14966]:  > logger="UnhandledError"
	Oct 06 14:40:29 functional-135520 kubelet[14966]: E1006 14:40:29.023801   14966 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-functional-135520" podUID="9c0f460a73b4e4a7087ce2a722c4cad4"
	Oct 06 14:40:29 functional-135520 kubelet[14966]: E1006 14:40:29.023746   14966 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 06 14:40:29 functional-135520 kubelet[14966]:         container etcd start failed in pod etcd-functional-135520_kube-system(f24ebbe4b3fc964d32e35d345c0d3653): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 06 14:40:29 functional-135520 kubelet[14966]:  > logger="UnhandledError"
	Oct 06 14:40:29 functional-135520 kubelet[14966]: E1006 14:40:29.024948   14966 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-functional-135520" podUID="f24ebbe4b3fc964d32e35d345c0d3653"
	Oct 06 14:40:30 functional-135520 kubelet[14966]: E1006 14:40:30.994095   14966 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-135520\" not found"
	Oct 06 14:40:31 functional-135520 kubelet[14966]: E1006 14:40:31.602306   14966 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-135520?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="7s"
	Oct 06 14:40:31 functional-135520 kubelet[14966]: I1006 14:40:31.764420   14966 kubelet_node_status.go:75] "Attempting to register node" node="functional-135520"
	Oct 06 14:40:31 functional-135520 kubelet[14966]: E1006 14:40:31.764871   14966 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8441/api/v1/nodes\": dial tcp 192.168.49.2:8441: connect: connection refused" node="functional-135520"
	Oct 06 14:40:33 functional-135520 kubelet[14966]: E1006 14:40:33.980503   14966 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-135520\" not found" node="functional-135520"
	Oct 06 14:40:34 functional-135520 kubelet[14966]: E1006 14:40:34.007644   14966 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 06 14:40:34 functional-135520 kubelet[14966]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 06 14:40:34 functional-135520 kubelet[14966]:  > podSandboxID="526b997044ad8cc54e45aef5a5faa2edaadc9cabbedd2784eaded2bd6562135f"
	Oct 06 14:40:34 functional-135520 kubelet[14966]: E1006 14:40:34.007745   14966 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 06 14:40:34 functional-135520 kubelet[14966]:         container kube-scheduler start failed in pod kube-scheduler-functional-135520_kube-system(5115bd1eba9594a3f2b99b5d6a4b9d59): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 06 14:40:34 functional-135520 kubelet[14966]:  > logger="UnhandledError"
	Oct 06 14:40:34 functional-135520 kubelet[14966]: E1006 14:40:34.007777   14966 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-functional-135520" podUID="5115bd1eba9594a3f2b99b5d6a4b9d59"
	Oct 06 14:40:36 functional-135520 kubelet[14966]: E1006 14:40:36.021610   14966 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8441/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8441: connect: connection refused" event="&Event{ObjectMeta:{functional-135520.186beda7023a08f5  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-135520,UID:functional-135520,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node functional-135520 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:functional-135520,},FirstTimestamp:2025-10-06 14:36:20.970989813 +0000 UTC m=+1.419813170,LastTimestamp:2025-10-06 14:36:20.970989813 +0000 UTC m=+1.419813170,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-135520,}"
	Oct 06 14:40:36 functional-135520 kubelet[14966]: E1006 14:40:36.228685   14966 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://192.168.49.2:8441/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
	Oct 06 14:40:38 functional-135520 kubelet[14966]: E1006 14:40:38.603588   14966 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-135520?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="7s"
	Oct 06 14:40:38 functional-135520 kubelet[14966]: I1006 14:40:38.766620   14966 kubelet_node_status.go:75] "Attempting to register node" node="functional-135520"
	Oct 06 14:40:38 functional-135520 kubelet[14966]: E1006 14:40:38.766986   14966 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8441/api/v1/nodes\": dial tcp 192.168.49.2:8441: connect: connection refused" node="functional-135520"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-135520 -n functional-135520
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-135520 -n functional-135520: exit status 2 (344.376347ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-135520" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/parallel/StatusCmd (2.99s)
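Editor's note: the CRI-O and kubelet logs dumped above all point at a single root cause. Every control-plane container (kube-apiserver, etcd, kube-scheduler) fails at create time with "cannot open sd-bus: No such file or directory", which is why the kubeadm health checks on :8441, :10257 and :10259 never succeed and the container-status table is empty. That error typically means CRI-O (via conmon) is configured for the systemd cgroup manager but cannot reach a systemd D-Bus socket inside the kic container. A minimal diagnostic sketch, assuming shell access to the minikube node; the drop-in path and file name below are illustrative assumptions, not taken from this run:

	# confirm that no control-plane containers were ever created
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a
	# look for the sd-bus failure in the CRI-O journal
	sudo journalctl -u crio -n 200 | grep -i sd-bus
	# check which cgroup manager CRI-O is using
	sudo grep -r cgroup_manager /etc/crio/
	# hypothetical workaround: switch CRI-O to the cgroupfs manager
	# (conmon_cgroup must be "pod" when cgroup_manager is "cgroupfs")
	cat <<'EOF' | sudo tee /etc/crio/crio.conf.d/99-cgroupfs.conf
	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	EOF
	sudo systemctl restart crio

If CRI-O's cgroup driver is changed this way, the kubelet's cgroupDriver (see /var/lib/kubelet/config.yaml) has to be switched to match, otherwise pod creation fails for a different reason.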

x
+
TestFunctional/parallel/ServiceCmdConnect (2.28s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-135520 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1636: (dbg) Non-zero exit: kubectl --context functional-135520 create deployment hello-node-connect --image kicbase/echo-server: exit status 1 (53.616829ms)

** stderr ** 
	error: failed to create deployment: Post "https://192.168.49.2:8441/apis/apps/v1/namespaces/default/deployments?fieldManager=kubectl-create&fieldValidation=Strict": dial tcp 192.168.49.2:8441: connect: connection refused

** /stderr **
functional_test.go:1638: failed to create hello-node deployment with this command "kubectl --context functional-135520 create deployment hello-node-connect --image kicbase/echo-server": exit status 1.
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-135520 describe po hello-node-connect
functional_test.go:1612: (dbg) Non-zero exit: kubectl --context functional-135520 describe po hello-node-connect: exit status 1 (57.262848ms)

** stderr ** 
	E1006 14:40:34.696045  675246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1006 14:40:34.696449  675246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1006 14:40:34.698754  675246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1006 14:40:34.699419  675246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1006 14:40:34.700841  675246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:1614: "kubectl --context functional-135520 describe po hello-node-connect" failed: exit status 1
functional_test.go:1616: hello-node pod describe:
functional_test.go:1618: (dbg) Run:  kubectl --context functional-135520 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-135520 logs -l app=hello-node-connect: exit status 1 (50.406636ms)

** stderr ** 
	E1006 14:40:34.748015  675282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1006 14:40:34.748360  675282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1006 14:40:34.749829  675282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1006 14:40:34.750095  675282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:1620: "kubectl --context functional-135520 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-135520 describe svc hello-node-connect
functional_test.go:1624: (dbg) Non-zero exit: kubectl --context functional-135520 describe svc hello-node-connect: exit status 1 (50.88831ms)

** stderr ** 
	E1006 14:40:34.798144  675298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1006 14:40:34.799141  675298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1006 14:40:34.799654  675298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1006 14:40:34.801091  675298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1006 14:40:34.801415  675298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:1626: "kubectl --context functional-135520 describe svc hello-node-connect" failed: exit status 1
functional_test.go:1628: hello-node svc describe:
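Note: the three failed commands above are the harness's service post-mortem sequence: describe the pods, fetch their logs, then describe the service. A minimal sketch of that sequence, with a hypothetical helper name rather than the actual functional_test.go code:

package main

import (
	"fmt"
	"os/exec"
)

// dumpServicePostMortem is a hypothetical helper mirroring the sequence
// above; with the apiserver refusing connections, every call fails.
func dumpServicePostMortem(kubectlContext, app string) {
	cmds := [][]string{
		{"describe", "po", app},
		{"logs", "-l", "app=" + app},
		{"describe", "svc", app},
	}
	for _, args := range cmds {
		full := append([]string{"--context", kubectlContext}, args...)
		out, err := exec.Command("kubectl", full...).CombinedOutput()
		fmt.Printf("kubectl %v:\n%s", full, out)
		if err != nil {
			fmt.Printf("(failed: %v)\n", err)
		}
	}
}

func main() {
	dumpServicePostMortem("functional-135520", "hello-node-connect")
}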
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-135520
helpers_test.go:243: (dbg) docker inspect functional-135520:

-- stdout --
	[
	    {
	        "Id": "3dd9a226ea42760de316c3cd10c240a06a297d856d2072b1bb8c9de31097dd20",
	        "Created": "2025-10-06T14:13:32.283355011Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 644403,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-06T14:13:32.318096257Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/3dd9a226ea42760de316c3cd10c240a06a297d856d2072b1bb8c9de31097dd20/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3dd9a226ea42760de316c3cd10c240a06a297d856d2072b1bb8c9de31097dd20/hostname",
	        "HostsPath": "/var/lib/docker/containers/3dd9a226ea42760de316c3cd10c240a06a297d856d2072b1bb8c9de31097dd20/hosts",
	        "LogPath": "/var/lib/docker/containers/3dd9a226ea42760de316c3cd10c240a06a297d856d2072b1bb8c9de31097dd20/3dd9a226ea42760de316c3cd10c240a06a297d856d2072b1bb8c9de31097dd20-json.log",
	        "Name": "/functional-135520",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-135520:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-135520",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "3dd9a226ea42760de316c3cd10c240a06a297d856d2072b1bb8c9de31097dd20",
	                "LowerDir": "/var/lib/docker/overlay2/fc963905026931708302dacddcd89a9d41c6b02cea585cc1ff491aa62dc8d60a-init/diff:/var/lib/docker/overlay2/498c39ad2e273bbda04a4b230222b9767ea2da097b1fe98436168d26143cd080/diff",
	                "MergedDir": "/var/lib/docker/overlay2/fc963905026931708302dacddcd89a9d41c6b02cea585cc1ff491aa62dc8d60a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/fc963905026931708302dacddcd89a9d41c6b02cea585cc1ff491aa62dc8d60a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/fc963905026931708302dacddcd89a9d41c6b02cea585cc1ff491aa62dc8d60a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-135520",
	                "Source": "/var/lib/docker/volumes/functional-135520/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-135520",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-135520",
	                "name.minikube.sigs.k8s.io": "functional-135520",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6368ffca3e5840f94a34614c511d9f0a0a4ca0d05de4fe1f94c8bfdc332f1a62",
	            "SandboxKey": "/var/run/docker/netns/6368ffca3e58",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32878"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32879"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32882"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32880"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32881"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-135520": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ae:d1:94:25:38:1c",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f712be59dd18dac98bed5f234c9f77a39e85277143d6f46285adcd3b0185d552",
	                    "EndpointID": "b816964b653b1b5116e3262dfdc87af272931013ef5b9e2714c9ff7357118a6f",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-135520",
	                        "3dd9a226ea42"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
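Note: the NetworkSettings.Ports map in the inspect output above is where the host-mapped ports come from; the cli_runner log lines later in this report apply the same Go template to "22/tcp". A minimal sketch reading back the apiserver mapping, assuming the docker CLI is on PATH and the container name from this report:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same Go template the cli_runner lines use for "22/tcp", applied to
	// the apiserver port 8441/tcp.
	out, err := exec.Command("docker", "container", "inspect",
		"-f", `{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}`,
		"functional-135520").Output()
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	// Prints 32881 for the run captured in this report.
	fmt.Println("apiserver host port:", strings.TrimSpace(string(out)))
}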
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-135520 -n functional-135520
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-135520 -n functional-135520: exit status 2 (309.487115ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-135520 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-135520 logs -n 25: (1.042431106s)
helpers_test.go:260: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                              ARGS                                                                               │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ tunnel  │ functional-135520 tunnel --alsologtostderr                                                                                                                      │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │                     │
	│ image   │ functional-135520 image load --daemon kicbase/echo-server:functional-135520 --alsologtostderr                                                                   │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │ 06 Oct 25 14:40 UTC │
	│ tunnel  │ functional-135520 tunnel --alsologtostderr                                                                                                                      │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │                     │
	│ addons  │ functional-135520 addons list                                                                                                                                   │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │ 06 Oct 25 14:40 UTC │
	│ addons  │ functional-135520 addons list -o json                                                                                                                           │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │ 06 Oct 25 14:40 UTC │
	│ image   │ functional-135520 image ls                                                                                                                                      │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │ 06 Oct 25 14:40 UTC │
	│ mount   │ -p functional-135520 /tmp/TestFunctionalparallelMountCmdany-port2160266487/001:/mount-9p --alsologtostderr -v=1                                                 │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │                     │
	│ ssh     │ functional-135520 ssh findmnt -T /mount-9p | grep 9p                                                                                                            │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │                     │
	│ ssh     │ functional-135520 ssh findmnt -T /mount-9p | grep 9p                                                                                                            │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │ 06 Oct 25 14:40 UTC │
	│ image   │ functional-135520 image load --daemon kicbase/echo-server:functional-135520 --alsologtostderr                                                                   │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │ 06 Oct 25 14:40 UTC │
	│ ssh     │ functional-135520 ssh -- ls -la /mount-9p                                                                                                                       │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │ 06 Oct 25 14:40 UTC │
	│ ssh     │ functional-135520 ssh cat /mount-9p/test-1759761631098316341                                                                                                    │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │ 06 Oct 25 14:40 UTC │
	│ ssh     │ functional-135520 ssh mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates                                                                                │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │                     │
	│ image   │ functional-135520 image ls                                                                                                                                      │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │ 06 Oct 25 14:40 UTC │
	│ image   │ functional-135520 image save kicbase/echo-server:functional-135520 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │ 06 Oct 25 14:40 UTC │
	│ ssh     │ functional-135520 ssh sudo umount -f /mount-9p                                                                                                                  │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │ 06 Oct 25 14:40 UTC │
	│ image   │ functional-135520 image rm kicbase/echo-server:functional-135520 --alsologtostderr                                                                              │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │ 06 Oct 25 14:40 UTC │
	│ mount   │ -p functional-135520 /tmp/TestFunctionalparallelMountCmdspecific-port2551281271/001:/mount-9p --alsologtostderr -v=1 --port 46464                               │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │                     │
	│ ssh     │ functional-135520 ssh findmnt -T /mount-9p | grep 9p                                                                                                            │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │                     │
	│ image   │ functional-135520 image ls                                                                                                                                      │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │ 06 Oct 25 14:40 UTC │
	│ image   │ functional-135520 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr                                       │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │ 06 Oct 25 14:40 UTC │
	│ image   │ functional-135520 image save --daemon kicbase/echo-server:functional-135520 --alsologtostderr                                                                   │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │ 06 Oct 25 14:40 UTC │
	│ ssh     │ functional-135520 ssh findmnt -T /mount-9p | grep 9p                                                                                                            │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │ 06 Oct 25 14:40 UTC │
	│ ssh     │ functional-135520 ssh -- ls -la /mount-9p                                                                                                                       │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │ 06 Oct 25 14:40 UTC │
	│ ssh     │ functional-135520 ssh sudo umount -f /mount-9p                                                                                                                  │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/06 14:28:06
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1006 14:28:06.515575  656123 out.go:360] Setting OutFile to fd 1 ...
	I1006 14:28:06.515775  656123 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 14:28:06.515777  656123 out.go:374] Setting ErrFile to fd 2...
	I1006 14:28:06.515780  656123 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 14:28:06.515998  656123 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-626179/.minikube/bin
	I1006 14:28:06.516461  656123 out.go:368] Setting JSON to false
	I1006 14:28:06.517416  656123 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":18622,"bootTime":1759742264,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1006 14:28:06.517495  656123 start.go:140] virtualization: kvm guest
	I1006 14:28:06.519514  656123 out.go:179] * [functional-135520] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1006 14:28:06.520800  656123 out.go:179]   - MINIKUBE_LOCATION=21701
	I1006 14:28:06.520851  656123 notify.go:220] Checking for updates...
	I1006 14:28:06.523025  656123 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1006 14:28:06.524163  656123 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21701-626179/kubeconfig
	I1006 14:28:06.525184  656123 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21701-626179/.minikube
	I1006 14:28:06.526184  656123 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1006 14:28:06.527199  656123 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1006 14:28:06.528788  656123 config.go:182] Loaded profile config "functional-135520": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 14:28:06.528884  656123 driver.go:421] Setting default libvirt URI to qemu:///system
	I1006 14:28:06.553892  656123 docker.go:123] docker version: linux-28.5.0:Docker Engine - Community
	I1006 14:28:06.554005  656123 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 14:28:06.610913  656123 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:false NGoroutines:58 SystemTime:2025-10-06 14:28:06.599957285 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1006 14:28:06.611014  656123 docker.go:318] overlay module found
	I1006 14:28:06.612730  656123 out.go:179] * Using the docker driver based on existing profile
	I1006 14:28:06.613792  656123 start.go:304] selected driver: docker
	I1006 14:28:06.613801  656123 start.go:924] validating driver "docker" against &{Name:functional-135520 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-135520 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 14:28:06.613876  656123 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1006 14:28:06.613960  656123 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 14:28:06.672658  656123 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:false NGoroutines:58 SystemTime:2025-10-06 14:28:06.663055015 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1006 14:28:06.673343  656123 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1006 14:28:06.673382  656123 cni.go:84] Creating CNI manager for ""
	I1006 14:28:06.673449  656123 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1006 14:28:06.673491  656123 start.go:348] cluster config:
	{Name:functional-135520 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-135520 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 14:28:06.675542  656123 out.go:179] * Starting "functional-135520" primary control-plane node in "functional-135520" cluster
	I1006 14:28:06.676765  656123 cache.go:123] Beginning downloading kic base image for docker with crio
	I1006 14:28:06.678012  656123 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1006 14:28:06.679109  656123 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1006 14:28:06.679148  656123 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21701-626179/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1006 14:28:06.679171  656123 cache.go:58] Caching tarball of preloaded images
	I1006 14:28:06.679229  656123 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1006 14:28:06.679315  656123 preload.go:233] Found /home/jenkins/minikube-integration/21701-626179/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1006 14:28:06.679322  656123 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1006 14:28:06.679424  656123 profile.go:143] Saving config to /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/config.json ...
	I1006 14:28:06.701440  656123 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1006 14:28:06.701451  656123 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1006 14:28:06.701470  656123 cache.go:232] Successfully downloaded all kic artifacts
	I1006 14:28:06.701500  656123 start.go:360] acquireMachinesLock for functional-135520: {Name:mk634323c4619e77647ac9d9aaca492e399526ae Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1006 14:28:06.701582  656123 start.go:364] duration metric: took 55.883µs to acquireMachinesLock for "functional-135520"
	I1006 14:28:06.701608  656123 start.go:96] Skipping create...Using existing machine configuration
	I1006 14:28:06.701614  656123 fix.go:54] fixHost starting: 
	I1006 14:28:06.701815  656123 cli_runner.go:164] Run: docker container inspect functional-135520 --format={{.State.Status}}
	I1006 14:28:06.719582  656123 fix.go:112] recreateIfNeeded on functional-135520: state=Running err=<nil>
	W1006 14:28:06.719608  656123 fix.go:138] unexpected machine state, will restart: <nil>
	I1006 14:28:06.721479  656123 out.go:252] * Updating the running docker "functional-135520" container ...
	I1006 14:28:06.721509  656123 machine.go:93] provisionDockerMachine start ...
	I1006 14:28:06.721596  656123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-135520
	I1006 14:28:06.739776  656123 main.go:141] libmachine: Using SSH client type: native
	I1006 14:28:06.740016  656123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32878 <nil> <nil>}
	I1006 14:28:06.740022  656123 main.go:141] libmachine: About to run SSH command:
	hostname
	I1006 14:28:06.883328  656123 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-135520
	
	I1006 14:28:06.883355  656123 ubuntu.go:182] provisioning hostname "functional-135520"
	I1006 14:28:06.883416  656123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-135520
	I1006 14:28:06.901008  656123 main.go:141] libmachine: Using SSH client type: native
	I1006 14:28:06.901274  656123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32878 <nil> <nil>}
	I1006 14:28:06.901282  656123 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-135520 && echo "functional-135520" | sudo tee /etc/hostname
	I1006 14:28:07.054829  656123 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-135520
	
	I1006 14:28:07.054893  656123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-135520
	I1006 14:28:07.073103  656123 main.go:141] libmachine: Using SSH client type: native
	I1006 14:28:07.073400  656123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32878 <nil> <nil>}
	I1006 14:28:07.073412  656123 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-135520' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-135520/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-135520' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1006 14:28:07.218044  656123 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1006 14:28:07.218064  656123 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21701-626179/.minikube CaCertPath:/home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21701-626179/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21701-626179/.minikube}
	I1006 14:28:07.218086  656123 ubuntu.go:190] setting up certificates
	I1006 14:28:07.218097  656123 provision.go:84] configureAuth start
	I1006 14:28:07.218147  656123 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-135520
	I1006 14:28:07.235320  656123 provision.go:143] copyHostCerts
	I1006 14:28:07.235375  656123 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-626179/.minikube/ca.pem, removing ...
	I1006 14:28:07.235390  656123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-626179/.minikube/ca.pem
	I1006 14:28:07.235462  656123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21701-626179/.minikube/ca.pem (1082 bytes)
	I1006 14:28:07.235557  656123 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-626179/.minikube/cert.pem, removing ...
	I1006 14:28:07.235561  656123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-626179/.minikube/cert.pem
	I1006 14:28:07.235585  656123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21701-626179/.minikube/cert.pem (1123 bytes)
	I1006 14:28:07.235653  656123 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-626179/.minikube/key.pem, removing ...
	I1006 14:28:07.235656  656123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-626179/.minikube/key.pem
	I1006 14:28:07.235685  656123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21701-626179/.minikube/key.pem (1679 bytes)
	I1006 14:28:07.235742  656123 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21701-626179/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca-key.pem org=jenkins.functional-135520 san=[127.0.0.1 192.168.49.2 functional-135520 localhost minikube]
	I1006 14:28:07.452963  656123 provision.go:177] copyRemoteCerts
	I1006 14:28:07.453021  656123 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1006 14:28:07.453058  656123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-135520
	I1006 14:28:07.470979  656123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32878 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/functional-135520/id_rsa Username:docker}
	I1006 14:28:07.572166  656123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1006 14:28:07.589268  656123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1006 14:28:07.606864  656123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1006 14:28:07.624012  656123 provision.go:87] duration metric: took 405.903097ms to configureAuth
	I1006 14:28:07.624031  656123 ubuntu.go:206] setting minikube options for container-runtime
	I1006 14:28:07.624198  656123 config.go:182] Loaded profile config "functional-135520": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 14:28:07.624358  656123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-135520
	I1006 14:28:07.642129  656123 main.go:141] libmachine: Using SSH client type: native
	I1006 14:28:07.642348  656123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32878 <nil> <nil>}
	I1006 14:28:07.642358  656123 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1006 14:28:07.930562  656123 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1006 14:28:07.930579  656123 machine.go:96] duration metric: took 1.209063221s to provisionDockerMachine
	I1006 14:28:07.930589  656123 start.go:293] postStartSetup for "functional-135520" (driver="docker")
	I1006 14:28:07.930598  656123 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1006 14:28:07.930651  656123 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1006 14:28:07.930687  656123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-135520
	I1006 14:28:07.948006  656123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32878 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/functional-135520/id_rsa Username:docker}
	I1006 14:28:08.049510  656123 ssh_runner.go:195] Run: cat /etc/os-release
	I1006 14:28:08.053027  656123 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1006 14:28:08.053042  656123 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1006 14:28:08.053061  656123 filesync.go:126] Scanning /home/jenkins/minikube-integration/21701-626179/.minikube/addons for local assets ...
	I1006 14:28:08.053110  656123 filesync.go:126] Scanning /home/jenkins/minikube-integration/21701-626179/.minikube/files for local assets ...
	I1006 14:28:08.053177  656123 filesync.go:149] local asset: /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem -> 6297192.pem in /etc/ssl/certs
	I1006 14:28:08.053267  656123 filesync.go:149] local asset: /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/test/nested/copy/629719/hosts -> hosts in /etc/test/nested/copy/629719
	I1006 14:28:08.053298  656123 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/629719
	I1006 14:28:08.060796  656123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem --> /etc/ssl/certs/6297192.pem (1708 bytes)
	I1006 14:28:08.077999  656123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/test/nested/copy/629719/hosts --> /etc/test/nested/copy/629719/hosts (40 bytes)
	I1006 14:28:08.094766  656123 start.go:296] duration metric: took 164.165544ms for postStartSetup
	I1006 14:28:08.094821  656123 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1006 14:28:08.094852  656123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-135520
	I1006 14:28:08.112238  656123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32878 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/functional-135520/id_rsa Username:docker}
	I1006 14:28:08.210200  656123 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1006 14:28:08.214744  656123 fix.go:56] duration metric: took 1.513121746s for fixHost
	I1006 14:28:08.214763  656123 start.go:83] releasing machines lock for "functional-135520", held for 1.513172056s
	I1006 14:28:08.214831  656123 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-135520
	I1006 14:28:08.231996  656123 ssh_runner.go:195] Run: cat /version.json
	I1006 14:28:08.232006  656123 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1006 14:28:08.232033  656123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-135520
	I1006 14:28:08.232059  656123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-135520
	I1006 14:28:08.250015  656123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32878 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/functional-135520/id_rsa Username:docker}
	I1006 14:28:08.250313  656123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32878 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/functional-135520/id_rsa Username:docker}
	I1006 14:28:08.415268  656123 ssh_runner.go:195] Run: systemctl --version
	I1006 14:28:08.422068  656123 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1006 14:28:08.458421  656123 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1006 14:28:08.463104  656123 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1006 14:28:08.463164  656123 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1006 14:28:08.471006  656123 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1006 14:28:08.471018  656123 start.go:495] detecting cgroup driver to use...
	I1006 14:28:08.471045  656123 detect.go:190] detected "systemd" cgroup driver on host os
	I1006 14:28:08.471088  656123 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1006 14:28:08.485271  656123 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1006 14:28:08.496859  656123 docker.go:218] disabling cri-docker service (if available) ...
	I1006 14:28:08.496895  656123 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1006 14:28:08.510507  656123 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1006 14:28:08.522301  656123 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1006 14:28:08.600902  656123 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1006 14:28:08.681762  656123 docker.go:234] disabling docker service ...
	I1006 14:28:08.681827  656123 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1006 14:28:08.696663  656123 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1006 14:28:08.708614  656123 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1006 14:28:08.788151  656123 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1006 14:28:08.872163  656123 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1006 14:28:08.884753  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1006 14:28:08.898897  656123 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1006 14:28:08.898940  656123 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:28:08.907545  656123 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1006 14:28:08.907597  656123 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:28:08.916027  656123 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:28:08.924428  656123 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:28:08.932498  656123 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1006 14:28:08.939984  656123 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:28:08.948324  656123 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:28:08.956705  656123 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:28:08.964969  656123 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1006 14:28:08.971804  656123 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1006 14:28:08.978693  656123 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 14:28:09.061389  656123 ssh_runner.go:195] Run: sudo systemctl restart crio
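
The run above edits /etc/crio/crio.conf.d/02-crio.conf in place and then restarts CRI-O. A minimal shell sketch of the same reconfiguration, using only commands visible in the log (drop-in path and values taken from the log; not the exact minikube code path):

    # point CRI-O at the pause image and the systemd cgroup manager
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf
    # allow pods to bind low ports via default_sysctls
    sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf
    # reload unit files and restart the runtime so the new config takes effect
    sudo systemctl daemon-reload
    sudo systemctl restart crio
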
	I1006 14:28:09.170335  656123 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1006 14:28:09.170401  656123 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1006 14:28:09.174497  656123 start.go:563] Will wait 60s for crictl version
	I1006 14:28:09.174546  656123 ssh_runner.go:195] Run: which crictl
	I1006 14:28:09.177947  656123 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1006 14:28:09.201915  656123 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
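
The version block above comes from the CRI API rather than the crio binary itself; a sketch of querying it directly over the socket the log waits on (standard crictl flags, shown as an assumed equivalent, not taken from the log):

    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
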
	I1006 14:28:09.201972  656123 ssh_runner.go:195] Run: crio --version
	I1006 14:28:09.230589  656123 ssh_runner.go:195] Run: crio --version
	I1006 14:28:09.260606  656123 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1006 14:28:09.261947  656123 cli_runner.go:164] Run: docker network inspect functional-135520 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1006 14:28:09.278672  656123 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1006 14:28:09.284367  656123 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1006 14:28:09.285382  656123 kubeadm.go:883] updating cluster {Name:functional-135520 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-135520 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1006 14:28:09.285546  656123 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1006 14:28:09.285603  656123 ssh_runner.go:195] Run: sudo crictl images --output json
	I1006 14:28:09.318027  656123 crio.go:514] all images are preloaded for cri-o runtime.
	I1006 14:28:09.318039  656123 crio.go:433] Images already preloaded, skipping extraction
	I1006 14:28:09.318088  656123 ssh_runner.go:195] Run: sudo crictl images --output json
	I1006 14:28:09.342904  656123 crio.go:514] all images are preloaded for cri-o runtime.
	I1006 14:28:09.342917  656123 cache_images.go:85] Images are preloaded, skipping loading
	I1006 14:28:09.342923  656123 kubeadm.go:934] updating node { 192.168.49.2 8441 v1.34.1 crio true true} ...
	I1006 14:28:09.343012  656123 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-135520 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:functional-135520 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1006 14:28:09.343066  656123 ssh_runner.go:195] Run: crio config
	I1006 14:28:09.388889  656123 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1006 14:28:09.388909  656123 cni.go:84] Creating CNI manager for ""
	I1006 14:28:09.388921  656123 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1006 14:28:09.388932  656123 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1006 14:28:09.388955  656123 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-135520 NodeName:functional-135520 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1006 14:28:09.389087  656123 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-135520"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1006 14:28:09.389140  656123 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1006 14:28:09.397400  656123 binaries.go:44] Found k8s binaries, skipping transfer
	I1006 14:28:09.397454  656123 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1006 14:28:09.404846  656123 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1006 14:28:09.416672  656123 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1006 14:28:09.428910  656123 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2063 bytes)
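
The rendered kubeadm config is staged as /var/tmp/minikube/kubeadm.yaml.new before being compared and swapped in below. A hedged sketch for exercising such a config without touching cluster state, assuming kubeadm's standard --dry-run flag (minikube itself drives individual init phases instead, as shown later):

    # print what kubeadm would do; writes nothing to /etc/kubernetes
    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init \
      --config /var/tmp/minikube/kubeadm.yaml.new --dry-run
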
	I1006 14:28:09.440961  656123 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1006 14:28:09.444714  656123 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 14:28:09.533656  656123 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1006 14:28:09.546185  656123 certs.go:69] Setting up /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520 for IP: 192.168.49.2
	I1006 14:28:09.546197  656123 certs.go:195] generating shared ca certs ...
	I1006 14:28:09.546290  656123 certs.go:227] acquiring lock for ca certs: {Name:mka0cc25cb6a953e937aa825fc55167759271aaf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:28:09.546440  656123 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21701-626179/.minikube/ca.key
	I1006 14:28:09.546475  656123 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21701-626179/.minikube/proxy-client-ca.key
	I1006 14:28:09.546482  656123 certs.go:257] generating profile certs ...
	I1006 14:28:09.546559  656123 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/client.key
	I1006 14:28:09.546594  656123 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/apiserver.key.72a46e8e
	I1006 14:28:09.546623  656123 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/proxy-client.key
	I1006 14:28:09.546728  656123 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/629719.pem (1338 bytes)
	W1006 14:28:09.546750  656123 certs.go:480] ignoring /home/jenkins/minikube-integration/21701-626179/.minikube/certs/629719_empty.pem, impossibly tiny 0 bytes
	I1006 14:28:09.546756  656123 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca-key.pem (1675 bytes)
	I1006 14:28:09.546775  656123 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem (1082 bytes)
	I1006 14:28:09.546793  656123 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/cert.pem (1123 bytes)
	I1006 14:28:09.546809  656123 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/key.pem (1679 bytes)
	I1006 14:28:09.546841  656123 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem (1708 bytes)
	I1006 14:28:09.547453  656123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1006 14:28:09.564638  656123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1006 14:28:09.581181  656123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1006 14:28:09.597600  656123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1006 14:28:09.614361  656123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1006 14:28:09.630631  656123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1006 14:28:09.647147  656123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1006 14:28:09.663361  656123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1006 14:28:09.679821  656123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem --> /usr/share/ca-certificates/6297192.pem (1708 bytes)
	I1006 14:28:09.696763  656123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1006 14:28:09.713335  656123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/certs/629719.pem --> /usr/share/ca-certificates/629719.pem (1338 bytes)
	I1006 14:28:09.729791  656123 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1006 14:28:09.741445  656123 ssh_runner.go:195] Run: openssl version
	I1006 14:28:09.747314  656123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1006 14:28:09.755183  656123 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1006 14:28:09.758724  656123 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  6 13:56 /usr/share/ca-certificates/minikubeCA.pem
	I1006 14:28:09.758757  656123 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1006 14:28:09.792226  656123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1006 14:28:09.799947  656123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/629719.pem && ln -fs /usr/share/ca-certificates/629719.pem /etc/ssl/certs/629719.pem"
	I1006 14:28:09.808163  656123 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/629719.pem
	I1006 14:28:09.811711  656123 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  6 14:13 /usr/share/ca-certificates/629719.pem
	I1006 14:28:09.811747  656123 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/629719.pem
	I1006 14:28:09.845740  656123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/629719.pem /etc/ssl/certs/51391683.0"
	I1006 14:28:09.854138  656123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6297192.pem && ln -fs /usr/share/ca-certificates/6297192.pem /etc/ssl/certs/6297192.pem"
	I1006 14:28:09.862651  656123 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6297192.pem
	I1006 14:28:09.866319  656123 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  6 14:13 /usr/share/ca-certificates/6297192.pem
	I1006 14:28:09.866364  656123 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6297192.pem
	I1006 14:28:09.900583  656123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6297192.pem /etc/ssl/certs/3ec20f2e.0"
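
The link names created above (b5213941.0, 51391683.0, 3ec20f2e.0) are OpenSSL subject-hash names, which is why each ln is preceded by an openssl x509 -hash call. A sketch of the derivation for one CA file (standard OpenSSL usage, not minikube code):

    pem=/usr/share/ca-certificates/minikubeCA.pem
    hash=$(openssl x509 -hash -noout -in "$pem")    # e.g. b5213941
    sudo ln -fs "$pem" "/etc/ssl/certs/${hash}.0"   # ".0" = first cert with this hash
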
	I1006 14:28:09.908997  656123 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1006 14:28:09.912812  656123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1006 14:28:09.946819  656123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1006 14:28:09.981139  656123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1006 14:28:10.015748  656123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1006 14:28:10.049705  656123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1006 14:28:10.084715  656123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
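
Each -checkend 86400 run above asks whether the certificate stays valid for the next 24 hours (86,400 seconds); openssl exits non-zero if it would expire within that window, which is the signal to regenerate. A sketch of one check made explicit:

    crt=/var/lib/minikube/certs/apiserver-kubelet-client.crt
    if openssl x509 -noout -in "$crt" -checkend 86400; then
      echo "cert valid for at least 24h"
    else
      echo "cert expires within 24h; regeneration warranted"
    fi
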
	I1006 14:28:10.119782  656123 kubeadm.go:400] StartCluster: {Name:functional-135520 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-135520 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 14:28:10.119890  656123 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1006 14:28:10.119973  656123 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1006 14:28:10.149719  656123 cri.go:89] found id: ""
	I1006 14:28:10.149774  656123 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1006 14:28:10.158129  656123 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1006 14:28:10.158143  656123 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1006 14:28:10.158217  656123 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1006 14:28:10.166324  656123 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1006 14:28:10.166847  656123 kubeconfig.go:125] found "functional-135520" server: "https://192.168.49.2:8441"
	I1006 14:28:10.168240  656123 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1006 14:28:10.175929  656123 kubeadm.go:644] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-10-06 14:13:37.047601698 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-10-06 14:28:09.438461717 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
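
The drift check above is a plain unified diff between the deployed config and the freshly rendered one, with any difference treated as "reconfigure". A sketch of the same decision:

    if ! sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new; then
      echo "kubeadm config drift detected; reconfiguring cluster"
    fi
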
	I1006 14:28:10.175939  656123 kubeadm.go:1160] stopping kube-system containers ...
	I1006 14:28:10.175953  656123 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1006 14:28:10.175996  656123 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1006 14:28:10.204289  656123 cri.go:89] found id: ""
	I1006 14:28:10.204358  656123 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1006 14:28:10.246949  656123 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1006 14:28:10.255460  656123 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5635 Oct  6 14:17 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5640 Oct  6 14:17 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5676 Oct  6 14:17 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Oct  6 14:17 /etc/kubernetes/scheduler.conf
	
	I1006 14:28:10.255526  656123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1006 14:28:10.263528  656123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1006 14:28:10.271432  656123 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1006 14:28:10.271482  656123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1006 14:28:10.278844  656123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1006 14:28:10.286462  656123 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1006 14:28:10.286516  656123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1006 14:28:10.293960  656123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1006 14:28:10.301358  656123 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1006 14:28:10.301414  656123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1006 14:28:10.308882  656123 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1006 14:28:10.316879  656123 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1006 14:28:10.360770  656123 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1006 14:28:12.195064  656123 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.834266287s)
	I1006 14:28:12.195115  656123 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1006 14:28:12.367120  656123 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1006 14:28:12.417483  656123 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
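
The restart path rebuilds the control plane piecewise rather than running a full kubeadm init; a sketch of the phase sequence exactly as it appears above (binary and config paths from the log):

    K=/var/lib/minikube/binaries/v1.34.1/kubeadm
    CFG=/var/tmp/minikube/kubeadm.yaml
    sudo "$K" init phase certs all --config "$CFG"          # (re)issue certificates
    sudo "$K" init phase kubeconfig all --config "$CFG"     # admin/kubelet/cm/scheduler kubeconfigs
    sudo "$K" init phase kubelet-start --config "$CFG"      # write kubelet config, (re)start kubelet
    sudo "$K" init phase control-plane all --config "$CFG"  # static pod manifests
    sudo "$K" init phase etcd local --config "$CFG"         # local etcd manifest
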
	I1006 14:28:12.470408  656123 api_server.go:52] waiting for apiserver process to appear ...
	I1006 14:28:12.470467  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:12.971496  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:13.471359  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:13.971266  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:14.470628  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:14.970727  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:15.470821  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:15.971537  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:16.470947  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:16.970796  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:17.471324  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:17.970807  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:18.471451  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:18.970803  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:19.471285  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:19.970529  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:20.471499  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:20.971288  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:21.471188  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:21.971466  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:22.471502  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:22.971321  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:23.471284  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:23.970994  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:24.470729  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:24.971445  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:25.470644  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:25.970962  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:26.471442  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:26.971311  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:27.470610  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:27.970961  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:28.470640  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:28.971300  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:29.470626  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:29.971278  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:30.471158  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:30.970980  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:31.470603  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:31.971449  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:32.471177  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:32.970617  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:33.471419  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:33.970722  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:34.471271  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:34.970652  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:35.470921  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:35.971492  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:36.470973  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:36.971256  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:37.471394  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:37.970703  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:38.470961  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:38.970907  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:39.471451  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:39.970850  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:40.471304  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:40.971524  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:41.470744  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:41.971222  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:42.471463  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:42.970604  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:43.470720  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:43.970989  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:44.470818  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:44.970672  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:45.470866  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:45.970683  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:46.471245  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:46.970914  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:47.471423  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:47.971442  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:48.470948  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:48.971501  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:49.471382  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:49.970705  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:50.471271  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:50.971251  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:51.471164  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:51.971336  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:52.471372  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:52.970578  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:53.471263  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:53.971000  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:54.471313  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:54.970838  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:55.470657  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:55.970901  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:56.470732  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:56.971609  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:57.470670  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:57.971054  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:58.470843  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:58.971017  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:59.471644  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:59.970666  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:29:00.471498  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:29:00.970805  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:29:01.471435  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:29:01.970733  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:29:02.470885  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:29:02.970839  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:29:03.470540  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:29:03.970872  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:29:04.470727  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:29:04.970673  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:29:05.471322  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:29:05.970626  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:29:06.470920  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:29:06.970887  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:29:07.471415  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:29:07.970944  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:29:08.470610  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:29:08.971309  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:29:09.470706  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:29:09.971450  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:29:10.471425  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:29:10.971283  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:29:11.470937  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:29:11.970687  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
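
The block above is a 500 ms poll for a kube-apiserver process; after about a minute with no match (14:28:12 through 14:29:12), minikube falls back to gathering diagnostics below. A sketch of an equivalent bounded wait:

    deadline=$((SECONDS + 60))
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      (( SECONDS >= deadline )) && { echo "apiserver never appeared" >&2; break; }
      sleep 0.5
    done
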
	I1006 14:29:12.471591  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:29:12.471676  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:29:12.498988  656123 cri.go:89] found id: ""
	I1006 14:29:12.499014  656123 logs.go:282] 0 containers: []
	W1006 14:29:12.499021  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:29:12.499026  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:29:12.499080  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:29:12.526057  656123 cri.go:89] found id: ""
	I1006 14:29:12.526074  656123 logs.go:282] 0 containers: []
	W1006 14:29:12.526080  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:29:12.526085  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:29:12.526164  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:29:12.553395  656123 cri.go:89] found id: ""
	I1006 14:29:12.553415  656123 logs.go:282] 0 containers: []
	W1006 14:29:12.553426  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:29:12.553433  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:29:12.553486  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:29:12.580815  656123 cri.go:89] found id: ""
	I1006 14:29:12.580836  656123 logs.go:282] 0 containers: []
	W1006 14:29:12.580846  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:29:12.580870  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:29:12.580931  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:29:12.607516  656123 cri.go:89] found id: ""
	I1006 14:29:12.607533  656123 logs.go:282] 0 containers: []
	W1006 14:29:12.607539  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:29:12.607544  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:29:12.607607  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:29:12.634248  656123 cri.go:89] found id: ""
	I1006 14:29:12.634265  656123 logs.go:282] 0 containers: []
	W1006 14:29:12.634272  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:29:12.634279  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:29:12.634335  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:29:12.660860  656123 cri.go:89] found id: ""
	I1006 14:29:12.660876  656123 logs.go:282] 0 containers: []
	W1006 14:29:12.660883  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:29:12.660893  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:29:12.660905  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:29:12.731400  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:29:12.731425  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:29:12.745150  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:29:12.745174  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:29:12.803068  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:29:12.795122    6708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:12.795709    6708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:12.797425    6708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:12.797887    6708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:12.799415    6708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:29:12.795122    6708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:12.795709    6708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:12.797425    6708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:12.797887    6708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:12.799415    6708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 14:29:12.803085  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:29:12.803098  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:29:12.870066  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:29:12.870091  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
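
With no apiserver process found, the diagnostics above can be reproduced by hand on the node; the same four sources, as run in the log:

    sudo journalctl -u kubelet -n 400   # kubelet logs
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo journalctl -u crio -n 400      # CRI-O runtime logs
    sudo crictl ps -a                   # container status (empty here: nothing started)
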
	I1006 14:29:15.401709  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:29:15.412675  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:29:15.412725  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:29:15.438239  656123 cri.go:89] found id: ""
	I1006 14:29:15.438255  656123 logs.go:282] 0 containers: []
	W1006 14:29:15.438264  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:29:15.438270  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:29:15.438322  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:29:15.463684  656123 cri.go:89] found id: ""
	I1006 14:29:15.463701  656123 logs.go:282] 0 containers: []
	W1006 14:29:15.463709  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:29:15.463715  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:29:15.463769  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:29:15.488259  656123 cri.go:89] found id: ""
	I1006 14:29:15.488276  656123 logs.go:282] 0 containers: []
	W1006 14:29:15.488284  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:29:15.488289  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:29:15.488347  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:29:15.514676  656123 cri.go:89] found id: ""
	I1006 14:29:15.514692  656123 logs.go:282] 0 containers: []
	W1006 14:29:15.514699  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:29:15.514704  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:29:15.514762  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:29:15.540755  656123 cri.go:89] found id: ""
	I1006 14:29:15.540770  656123 logs.go:282] 0 containers: []
	W1006 14:29:15.540776  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:29:15.540781  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:29:15.540832  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:29:15.565570  656123 cri.go:89] found id: ""
	I1006 14:29:15.565588  656123 logs.go:282] 0 containers: []
	W1006 14:29:15.565598  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:29:15.565604  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:29:15.565651  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:29:15.591845  656123 cri.go:89] found id: ""
	I1006 14:29:15.591860  656123 logs.go:282] 0 containers: []
	W1006 14:29:15.591876  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:29:15.591885  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:29:15.591895  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:29:15.605051  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:29:15.605069  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:29:15.662500  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:29:15.655240    6822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:15.655743    6822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:15.657283    6822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:15.657783    6822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:15.659338    6822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:29:15.655240    6822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:15.655743    6822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:15.657283    6822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:15.657783    6822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:15.659338    6822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 14:29:15.662517  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:29:15.662531  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:29:15.727404  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:29:15.727424  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:29:15.756261  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:29:15.756279  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:29:18.330899  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:29:18.342312  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:29:18.342369  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:29:18.367886  656123 cri.go:89] found id: ""
	I1006 14:29:18.367902  656123 logs.go:282] 0 containers: []
	W1006 14:29:18.367912  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:29:18.367919  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:29:18.367967  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:29:18.394659  656123 cri.go:89] found id: ""
	I1006 14:29:18.394676  656123 logs.go:282] 0 containers: []
	W1006 14:29:18.394685  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:29:18.394691  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:29:18.394752  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:29:18.420739  656123 cri.go:89] found id: ""
	I1006 14:29:18.420762  656123 logs.go:282] 0 containers: []
	W1006 14:29:18.420773  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:29:18.420780  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:29:18.420844  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:29:18.446534  656123 cri.go:89] found id: ""
	I1006 14:29:18.446553  656123 logs.go:282] 0 containers: []
	W1006 14:29:18.446560  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:29:18.446565  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:29:18.446610  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:29:18.474847  656123 cri.go:89] found id: ""
	I1006 14:29:18.474867  656123 logs.go:282] 0 containers: []
	W1006 14:29:18.474876  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:29:18.474882  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:29:18.474940  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:29:18.500739  656123 cri.go:89] found id: ""
	I1006 14:29:18.500755  656123 logs.go:282] 0 containers: []
	W1006 14:29:18.500762  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:29:18.500767  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:29:18.500817  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:29:18.526704  656123 cri.go:89] found id: ""
	I1006 14:29:18.526720  656123 logs.go:282] 0 containers: []
	W1006 14:29:18.526726  656123 logs.go:284] No container was found matching "kindnet"
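	The scan above issues one crictl call per expected control-plane component. The same check, compacted into a loop; the component names and flags are exactly the ones the gatherer uses:

	    # List container IDs (running or exited) for each expected component.
	    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	             kube-controller-manager kindnet; do
	      ids=$(sudo crictl ps -a --quiet --name="$c")
	      echo "$c: ${ids:-<none>}"
	    done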
	I1006 14:29:18.526735  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:29:18.526749  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:29:18.594578  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:29:18.594601  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:29:18.608090  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:29:18.608110  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:29:18.665980  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:29:18.658366    6961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:18.658897    6961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:18.660516    6961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:18.660915    6961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:18.662586    6961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:29:18.658366    6961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:18.658897    6961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:18.660516    6961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:18.660915    6961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:18.662586    6961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 14:29:18.665999  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:29:18.666015  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:29:18.726769  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:29:18.726792  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
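	With the apiserver unreachable, the only usable evidence is host-side: unit journals, kernel messages, and raw container state. The sketch below reruns the same gathering commands by hand, assuming systemd units named kubelet and crio as in the log:

	    # Last 400 lines of the kubelet and CRI-O journals.
	    sudo journalctl -u kubelet -n 400
	    sudo journalctl -u crio -n 400

	    # Kernel warnings and worse: -P no pager, -H human-readable,
	    # -L=never disables color, --level filters by priority.
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400

	    # All containers, falling back to docker if crictl is absent.
	    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a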
	I1006 14:29:21.257561  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:29:21.269556  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:29:21.269611  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:29:21.295967  656123 cri.go:89] found id: ""
	I1006 14:29:21.295989  656123 logs.go:282] 0 containers: []
	W1006 14:29:21.296000  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:29:21.296007  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:29:21.296062  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:29:21.323201  656123 cri.go:89] found id: ""
	I1006 14:29:21.323232  656123 logs.go:282] 0 containers: []
	W1006 14:29:21.323240  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:29:21.323246  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:29:21.323297  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:29:21.352254  656123 cri.go:89] found id: ""
	I1006 14:29:21.352271  656123 logs.go:282] 0 containers: []
	W1006 14:29:21.352277  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:29:21.352282  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:29:21.352343  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:29:21.380457  656123 cri.go:89] found id: ""
	I1006 14:29:21.380477  656123 logs.go:282] 0 containers: []
	W1006 14:29:21.380486  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:29:21.380493  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:29:21.380559  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:29:21.408352  656123 cri.go:89] found id: ""
	I1006 14:29:21.408368  656123 logs.go:282] 0 containers: []
	W1006 14:29:21.408375  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:29:21.408379  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:29:21.408435  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:29:21.434925  656123 cri.go:89] found id: ""
	I1006 14:29:21.434941  656123 logs.go:282] 0 containers: []
	W1006 14:29:21.434948  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:29:21.434953  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:29:21.435001  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:29:21.462533  656123 cri.go:89] found id: ""
	I1006 14:29:21.462551  656123 logs.go:282] 0 containers: []
	W1006 14:29:21.462560  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:29:21.462570  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:29:21.462587  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:29:21.532658  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:29:21.532682  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:29:21.547259  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:29:21.547286  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:29:21.605779  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:29:21.598199    7083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:21.598802    7083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:21.600396    7083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:21.600847    7083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:21.602071    7083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:29:21.598199    7083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:21.598802    7083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:21.600396    7083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:21.600847    7083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:21.602071    7083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 14:29:21.605799  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:29:21.605816  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:29:21.670469  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:29:21.670493  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:29:24.203350  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:29:24.214528  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:29:24.214576  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:29:24.241149  656123 cri.go:89] found id: ""
	I1006 14:29:24.241173  656123 logs.go:282] 0 containers: []
	W1006 14:29:24.241182  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:29:24.241187  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:29:24.241259  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:29:24.267072  656123 cri.go:89] found id: ""
	I1006 14:29:24.267089  656123 logs.go:282] 0 containers: []
	W1006 14:29:24.267099  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:29:24.267104  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:29:24.267157  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:29:24.292610  656123 cri.go:89] found id: ""
	I1006 14:29:24.292629  656123 logs.go:282] 0 containers: []
	W1006 14:29:24.292639  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:29:24.292645  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:29:24.292694  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:29:24.318386  656123 cri.go:89] found id: ""
	I1006 14:29:24.318403  656123 logs.go:282] 0 containers: []
	W1006 14:29:24.318409  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:29:24.318414  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:29:24.318471  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:29:24.344804  656123 cri.go:89] found id: ""
	I1006 14:29:24.344827  656123 logs.go:282] 0 containers: []
	W1006 14:29:24.344837  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:29:24.344843  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:29:24.344893  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:29:24.372496  656123 cri.go:89] found id: ""
	I1006 14:29:24.372512  656123 logs.go:282] 0 containers: []
	W1006 14:29:24.372518  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:29:24.372523  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:29:24.372569  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:29:24.397473  656123 cri.go:89] found id: ""
	I1006 14:29:24.397489  656123 logs.go:282] 0 containers: []
	W1006 14:29:24.397495  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:29:24.397503  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:29:24.397514  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:29:24.460002  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:29:24.460024  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:29:24.492377  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:29:24.492394  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:29:24.558943  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:29:24.558960  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:29:24.572667  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:29:24.572685  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:29:24.631693  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:29:24.623841    7216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:24.624453    7216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:24.626057    7216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:24.626493    7216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:24.628013    7216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:29:24.623841    7216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:24.624453    7216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:24.626057    7216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:24.626493    7216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:24.628013    7216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
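	Each failed "describe nodes" attempt invokes the kubectl binary minikube staged on the node, pinned to the cluster's Kubernetes version, against the node-local kubeconfig. The probe can be reproduced by hand; the paths and version directory are taken verbatim from the log:

	    # Fails with "connection refused" until kube-apiserver
	    # is actually serving on localhost:8441.
	    sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes \
	      --kubeconfig=/var/lib/minikube/kubeconfig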
	I1006 14:29:27.132387  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:29:27.143350  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:29:27.143429  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:29:27.169854  656123 cri.go:89] found id: ""
	I1006 14:29:27.169869  656123 logs.go:282] 0 containers: []
	W1006 14:29:27.169877  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:29:27.169882  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:29:27.169930  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:29:27.196448  656123 cri.go:89] found id: ""
	I1006 14:29:27.196464  656123 logs.go:282] 0 containers: []
	W1006 14:29:27.196471  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:29:27.196476  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:29:27.196522  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:29:27.223046  656123 cri.go:89] found id: ""
	I1006 14:29:27.223066  656123 logs.go:282] 0 containers: []
	W1006 14:29:27.223075  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:29:27.223081  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:29:27.223147  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:29:27.249726  656123 cri.go:89] found id: ""
	I1006 14:29:27.249744  656123 logs.go:282] 0 containers: []
	W1006 14:29:27.249751  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:29:27.249756  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:29:27.249810  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:29:27.277358  656123 cri.go:89] found id: ""
	I1006 14:29:27.277376  656123 logs.go:282] 0 containers: []
	W1006 14:29:27.277391  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:29:27.277398  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:29:27.277468  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:29:27.303432  656123 cri.go:89] found id: ""
	I1006 14:29:27.303452  656123 logs.go:282] 0 containers: []
	W1006 14:29:27.303461  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:29:27.303467  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:29:27.303524  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:29:27.330642  656123 cri.go:89] found id: ""
	I1006 14:29:27.330660  656123 logs.go:282] 0 containers: []
	W1006 14:29:27.330666  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:29:27.330677  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:29:27.330692  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:29:27.360553  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:29:27.360570  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:29:27.428526  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:29:27.428550  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:29:27.442696  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:29:27.442720  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:29:27.500958  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:29:27.493064    7333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:27.493671    7333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:27.495253    7333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:27.495769    7333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:27.497273    7333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:29:27.493064    7333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:27.493671    7333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:27.495253    7333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:27.495769    7333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:27.497273    7333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 14:29:27.500983  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:29:27.500995  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:29:30.062974  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:29:30.074243  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:29:30.074297  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:29:30.101939  656123 cri.go:89] found id: ""
	I1006 14:29:30.101960  656123 logs.go:282] 0 containers: []
	W1006 14:29:30.101967  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:29:30.101973  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:29:30.102021  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:29:30.130122  656123 cri.go:89] found id: ""
	I1006 14:29:30.130139  656123 logs.go:282] 0 containers: []
	W1006 14:29:30.130145  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:29:30.130151  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:29:30.130229  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:29:30.157742  656123 cri.go:89] found id: ""
	I1006 14:29:30.157759  656123 logs.go:282] 0 containers: []
	W1006 14:29:30.157767  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:29:30.157773  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:29:30.157830  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:29:30.184613  656123 cri.go:89] found id: ""
	I1006 14:29:30.184634  656123 logs.go:282] 0 containers: []
	W1006 14:29:30.184641  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:29:30.184646  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:29:30.184696  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:29:30.212547  656123 cri.go:89] found id: ""
	I1006 14:29:30.212563  656123 logs.go:282] 0 containers: []
	W1006 14:29:30.212577  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:29:30.212582  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:29:30.212631  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:29:30.240288  656123 cri.go:89] found id: ""
	I1006 14:29:30.240303  656123 logs.go:282] 0 containers: []
	W1006 14:29:30.240310  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:29:30.240315  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:29:30.240365  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:29:30.267014  656123 cri.go:89] found id: ""
	I1006 14:29:30.267030  656123 logs.go:282] 0 containers: []
	W1006 14:29:30.267038  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:29:30.267047  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:29:30.267062  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:29:30.280742  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:29:30.280768  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:29:30.340211  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:29:30.332660    7440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:30.333170    7440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:30.334689    7440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:30.335152    7440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:30.336640    7440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:29:30.332660    7440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:30.333170    7440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:30.334689    7440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:30.335152    7440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:30.336640    7440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 14:29:30.340244  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:29:30.340259  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:29:30.401294  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:29:30.401334  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:29:30.433250  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:29:30.433271  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
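	The whole cycle repeats on a roughly three-second cadence: pgrep for an apiserver process, rescan containers, regather logs. The liveness check alone, as a standalone wait loop (-f matches against the full command line, -x requires the pattern to match it exactly, -n keeps only the newest match, the same flags used above):

	    # Block until an apiserver process for this profile appears.
	    while ! sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	      sleep 3
	    done
	    echo "kube-apiserver process is up"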
	I1006 14:29:33.006726  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:29:33.018059  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:29:33.018122  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:29:33.045352  656123 cri.go:89] found id: ""
	I1006 14:29:33.045372  656123 logs.go:282] 0 containers: []
	W1006 14:29:33.045380  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:29:33.045386  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:29:33.045436  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:29:33.072234  656123 cri.go:89] found id: ""
	I1006 14:29:33.072252  656123 logs.go:282] 0 containers: []
	W1006 14:29:33.072260  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:29:33.072265  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:29:33.072315  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:29:33.100162  656123 cri.go:89] found id: ""
	I1006 14:29:33.100178  656123 logs.go:282] 0 containers: []
	W1006 14:29:33.100185  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:29:33.100190  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:29:33.100258  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:29:33.128258  656123 cri.go:89] found id: ""
	I1006 14:29:33.128278  656123 logs.go:282] 0 containers: []
	W1006 14:29:33.128288  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:29:33.128293  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:29:33.128342  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:29:33.155116  656123 cri.go:89] found id: ""
	I1006 14:29:33.155146  656123 logs.go:282] 0 containers: []
	W1006 14:29:33.155153  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:29:33.155158  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:29:33.155226  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:29:33.183135  656123 cri.go:89] found id: ""
	I1006 14:29:33.183150  656123 logs.go:282] 0 containers: []
	W1006 14:29:33.183156  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:29:33.183161  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:29:33.183243  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:29:33.209826  656123 cri.go:89] found id: ""
	I1006 14:29:33.209844  656123 logs.go:282] 0 containers: []
	W1006 14:29:33.209851  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:29:33.209859  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:29:33.209870  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:29:33.276119  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:29:33.276145  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:29:33.289780  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:29:33.289805  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:29:33.346572  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:29:33.338882    7581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:33.339397    7581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:33.341034    7581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:33.341541    7581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:33.343088    7581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:29:33.338882    7581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:33.339397    7581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:33.341034    7581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:33.341541    7581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:33.343088    7581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 14:29:33.346592  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:29:33.346605  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:29:33.413643  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:29:33.413673  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:29:35.944641  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:29:35.955753  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:29:35.955806  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:29:35.981909  656123 cri.go:89] found id: ""
	I1006 14:29:35.981923  656123 logs.go:282] 0 containers: []
	W1006 14:29:35.981930  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:29:35.981935  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:29:35.981981  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:29:36.006585  656123 cri.go:89] found id: ""
	I1006 14:29:36.006605  656123 logs.go:282] 0 containers: []
	W1006 14:29:36.006615  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:29:36.006621  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:29:36.006687  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:29:36.034185  656123 cri.go:89] found id: ""
	I1006 14:29:36.034211  656123 logs.go:282] 0 containers: []
	W1006 14:29:36.034221  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:29:36.034228  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:29:36.034279  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:29:36.060600  656123 cri.go:89] found id: ""
	I1006 14:29:36.060618  656123 logs.go:282] 0 containers: []
	W1006 14:29:36.060625  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:29:36.060630  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:29:36.060676  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:29:36.086928  656123 cri.go:89] found id: ""
	I1006 14:29:36.086945  656123 logs.go:282] 0 containers: []
	W1006 14:29:36.086953  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:29:36.086957  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:29:36.087073  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:29:36.112833  656123 cri.go:89] found id: ""
	I1006 14:29:36.112851  656123 logs.go:282] 0 containers: []
	W1006 14:29:36.112875  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:29:36.112882  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:29:36.112944  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:29:36.139970  656123 cri.go:89] found id: ""
	I1006 14:29:36.139991  656123 logs.go:282] 0 containers: []
	W1006 14:29:36.140002  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:29:36.140014  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:29:36.140030  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:29:36.153360  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:29:36.153383  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:29:36.209902  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:29:36.202455    7695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:36.202929    7695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:36.204558    7695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:36.205025    7695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:36.206599    7695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:29:36.202455    7695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:36.202929    7695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:36.204558    7695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:36.205025    7695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:36.206599    7695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 14:29:36.209916  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:29:36.209929  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:29:36.276242  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:29:36.276264  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:29:36.305135  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:29:36.305152  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:29:38.872573  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:29:38.884454  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:29:38.884512  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:29:38.911055  656123 cri.go:89] found id: ""
	I1006 14:29:38.911071  656123 logs.go:282] 0 containers: []
	W1006 14:29:38.911076  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:29:38.911081  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:29:38.911142  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:29:38.937413  656123 cri.go:89] found id: ""
	I1006 14:29:38.937433  656123 logs.go:282] 0 containers: []
	W1006 14:29:38.937441  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:29:38.937450  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:29:38.937529  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:29:38.963534  656123 cri.go:89] found id: ""
	I1006 14:29:38.963557  656123 logs.go:282] 0 containers: []
	W1006 14:29:38.963564  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:29:38.963569  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:29:38.963619  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:29:38.989811  656123 cri.go:89] found id: ""
	I1006 14:29:38.989825  656123 logs.go:282] 0 containers: []
	W1006 14:29:38.989831  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:29:38.989836  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:29:38.989882  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:29:39.016789  656123 cri.go:89] found id: ""
	I1006 14:29:39.016809  656123 logs.go:282] 0 containers: []
	W1006 14:29:39.016818  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:29:39.016824  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:29:39.016876  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:29:39.042392  656123 cri.go:89] found id: ""
	I1006 14:29:39.042407  656123 logs.go:282] 0 containers: []
	W1006 14:29:39.042413  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:29:39.042426  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:29:39.042473  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:29:39.068836  656123 cri.go:89] found id: ""
	I1006 14:29:39.068852  656123 logs.go:282] 0 containers: []
	W1006 14:29:39.068859  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:29:39.068867  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:29:39.068877  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:29:39.137663  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:29:39.137689  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:29:39.151471  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:29:39.151495  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:29:39.209176  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:29:39.201542    7818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:39.202107    7818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:39.203710    7818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:39.204183    7818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:39.205768    7818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:29:39.201542    7818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:39.202107    7818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:39.203710    7818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:39.204183    7818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:39.205768    7818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 14:29:39.209192  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:29:39.209218  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:29:39.274008  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:29:39.274031  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
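	Every scan keeps reporting 0 containers for every component, so the control plane was never created here rather than crashed: CRI-O has no record of the pods at all. Since minikube bootstraps with kubeadm, the static pod manifests are the natural next check; a sketch assuming the conventional kubeadm manifest directory, which this log does not itself show:

	    # Static pod manifests the kubelet is supposed to launch.
	    ls -l /etc/kubernetes/manifests/

	    # Why the kubelet is not starting them (or is not healthy itself).
	    sudo journalctl -u kubelet -n 400 --no-pager | grep -iE 'error|fail' | tail -n 40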
	I1006 14:29:41.804322  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:29:41.815323  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:29:41.815387  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:29:41.842055  656123 cri.go:89] found id: ""
	I1006 14:29:41.842070  656123 logs.go:282] 0 containers: []
	W1006 14:29:41.842077  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:29:41.842082  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:29:41.842129  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:29:41.868733  656123 cri.go:89] found id: ""
	I1006 14:29:41.868750  656123 logs.go:282] 0 containers: []
	W1006 14:29:41.868756  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:29:41.868762  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:29:41.868809  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:29:41.896710  656123 cri.go:89] found id: ""
	I1006 14:29:41.896732  656123 logs.go:282] 0 containers: []
	W1006 14:29:41.896742  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:29:41.896750  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:29:41.896807  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:29:41.924854  656123 cri.go:89] found id: ""
	I1006 14:29:41.924875  656123 logs.go:282] 0 containers: []
	W1006 14:29:41.924884  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:29:41.924891  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:29:41.924950  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:29:41.952359  656123 cri.go:89] found id: ""
	I1006 14:29:41.952376  656123 logs.go:282] 0 containers: []
	W1006 14:29:41.952382  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:29:41.952387  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:29:41.952453  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:29:41.979613  656123 cri.go:89] found id: ""
	I1006 14:29:41.979629  656123 logs.go:282] 0 containers: []
	W1006 14:29:41.979636  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:29:41.979640  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:29:41.979690  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:29:42.006904  656123 cri.go:89] found id: ""
	I1006 14:29:42.006923  656123 logs.go:282] 0 containers: []
	W1006 14:29:42.006931  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:29:42.006941  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:29:42.006953  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:29:42.020495  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:29:42.020518  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:29:42.078512  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:29:42.070746    7942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:42.071276    7942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:42.072881    7942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:42.073322    7942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:42.074846    7942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:29:42.070746    7942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:42.071276    7942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:42.072881    7942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:42.073322    7942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:42.074846    7942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 14:29:42.078528  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:29:42.078543  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:29:42.143410  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:29:42.143435  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:29:42.173024  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:29:42.173042  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
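The block above is one full iteration of minikube's wait-for-apiserver loop: it probes for a kube-apiserver process with pgrep, lists CRI containers for each control-plane component with crictl, finds none, and then gathers kubelet, dmesg, describe-nodes, CRI-O, and container-status logs before retrying a few seconds later. The sketch below is a hypothetical reconstruction of that loop's shape; the commands and component names are taken from the log, while the structure, the 3-second sleep, and the 6-minute deadline are assumptions, not minikube's actual implementation.

	// Hypothetical reconstruction of the retry loop visible in the log;
	// not minikube's actual code.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	var components = []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet",
	}

	// containerIDs mirrors `sudo crictl ps -a --quiet --name=<component>`.
	func containerIDs(name string) []string {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil
		}
		return strings.Fields(string(out))
	}

	func main() {
		deadline := time.Now().Add(6 * time.Minute) // assumed timeout
		for time.Now().Before(deadline) {
			// Mirrors `sudo pgrep -xnf kube-apiserver.*minikube.*`:
			// pgrep exits 0 only when a matching process exists.
			if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
				fmt.Println("kube-apiserver process found")
				return
			}
			for _, c := range components {
				if len(containerIDs(c)) == 0 {
					fmt.Printf("no container was found matching %q\n", c)
				}
			}
			time.Sleep(3 * time.Second) // the log shows roughly 3s between cycles
		}
		fmt.Println("timed out waiting for kube-apiserver")
	}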
	I1006 14:29:44.740873  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:29:44.751791  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:29:44.751852  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:29:44.777079  656123 cri.go:89] found id: ""
	I1006 14:29:44.777096  656123 logs.go:282] 0 containers: []
	W1006 14:29:44.777103  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:29:44.777108  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:29:44.777158  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:29:44.802137  656123 cri.go:89] found id: ""
	I1006 14:29:44.802151  656123 logs.go:282] 0 containers: []
	W1006 14:29:44.802158  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:29:44.802163  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:29:44.802227  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:29:44.827942  656123 cri.go:89] found id: ""
	I1006 14:29:44.827957  656123 logs.go:282] 0 containers: []
	W1006 14:29:44.827964  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:29:44.827970  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:29:44.828014  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:29:44.853867  656123 cri.go:89] found id: ""
	I1006 14:29:44.853886  656123 logs.go:282] 0 containers: []
	W1006 14:29:44.853894  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:29:44.853901  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:29:44.853956  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:29:44.879907  656123 cri.go:89] found id: ""
	I1006 14:29:44.879923  656123 logs.go:282] 0 containers: []
	W1006 14:29:44.879931  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:29:44.879937  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:29:44.879994  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:29:44.905634  656123 cri.go:89] found id: ""
	I1006 14:29:44.905654  656123 logs.go:282] 0 containers: []
	W1006 14:29:44.905663  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:29:44.905673  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:29:44.905731  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:29:44.932500  656123 cri.go:89] found id: ""
	I1006 14:29:44.932515  656123 logs.go:282] 0 containers: []
	W1006 14:29:44.932524  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:29:44.932532  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:29:44.932543  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:29:44.960602  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:29:44.960619  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:29:45.030445  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:29:45.030474  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:29:45.043971  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:29:45.043991  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:29:45.101230  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:29:45.093566    8088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:45.094142    8088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:45.095685    8088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:45.096125    8088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:45.097721    8088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:29:45.093566    8088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:45.094142    8088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:45.095685    8088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:45.096125    8088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:45.097721    8088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 14:29:45.101246  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:29:45.101259  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:29:47.666091  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:29:47.677001  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:29:47.677061  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:29:47.703386  656123 cri.go:89] found id: ""
	I1006 14:29:47.703404  656123 logs.go:282] 0 containers: []
	W1006 14:29:47.703412  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:29:47.703423  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:29:47.703482  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:29:47.729961  656123 cri.go:89] found id: ""
	I1006 14:29:47.729978  656123 logs.go:282] 0 containers: []
	W1006 14:29:47.729985  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:29:47.729998  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:29:47.730046  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:29:47.757114  656123 cri.go:89] found id: ""
	I1006 14:29:47.757148  656123 logs.go:282] 0 containers: []
	W1006 14:29:47.757155  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:29:47.757160  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:29:47.757220  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:29:47.783979  656123 cri.go:89] found id: ""
	I1006 14:29:47.783997  656123 logs.go:282] 0 containers: []
	W1006 14:29:47.784004  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:29:47.784008  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:29:47.784054  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:29:47.809265  656123 cri.go:89] found id: ""
	I1006 14:29:47.809280  656123 logs.go:282] 0 containers: []
	W1006 14:29:47.809287  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:29:47.809292  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:29:47.809337  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:29:47.834447  656123 cri.go:89] found id: ""
	I1006 14:29:47.834463  656123 logs.go:282] 0 containers: []
	W1006 14:29:47.834470  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:29:47.834474  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:29:47.834518  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:29:47.860785  656123 cri.go:89] found id: ""
	I1006 14:29:47.860802  656123 logs.go:282] 0 containers: []
	W1006 14:29:47.860808  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:29:47.860817  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:29:47.860827  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:29:47.928576  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:29:47.928600  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:29:47.942643  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:29:47.942669  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:29:48.000352  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:29:47.992403    8197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:47.992971    8197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:47.994566    8197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:47.995054    8197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:47.996597    8197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:29:47.992403    8197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:47.992971    8197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:47.994566    8197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:47.995054    8197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:47.996597    8197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 14:29:48.000373  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:29:48.000391  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:29:48.065612  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:29:48.065640  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
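Every `kubectl describe nodes` attempt in these cycles fails the same way: nothing is listening on localhost:8441, the apiserver port this profile is configured with, so each request dies with `connect: connection refused`. A minimal reachability probe like the following reproduces that failure mode; only the address is taken from the log, the rest is illustrative.

	// Minimal sketch of a TCP reachability check for the apiserver port.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
		if err != nil {
			// Prints e.g. "dial tcp [::1]:8441: connect: connection refused",
			// matching the errors in the log above.
			fmt.Println("apiserver unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("apiserver port is accepting connections")
	}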
	I1006 14:29:50.596504  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:29:50.607654  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:29:50.607709  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:29:50.634723  656123 cri.go:89] found id: ""
	I1006 14:29:50.634742  656123 logs.go:282] 0 containers: []
	W1006 14:29:50.634751  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:29:50.634758  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:29:50.634821  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:29:50.662103  656123 cri.go:89] found id: ""
	I1006 14:29:50.662122  656123 logs.go:282] 0 containers: []
	W1006 14:29:50.662152  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:29:50.662160  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:29:50.662232  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:29:50.688627  656123 cri.go:89] found id: ""
	I1006 14:29:50.688646  656123 logs.go:282] 0 containers: []
	W1006 14:29:50.688653  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:29:50.688658  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:29:50.688719  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:29:50.715511  656123 cri.go:89] found id: ""
	I1006 14:29:50.715530  656123 logs.go:282] 0 containers: []
	W1006 14:29:50.715540  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:29:50.715544  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:29:50.715608  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:29:50.742597  656123 cri.go:89] found id: ""
	I1006 14:29:50.742612  656123 logs.go:282] 0 containers: []
	W1006 14:29:50.742619  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:29:50.742624  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:29:50.742671  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:29:50.769656  656123 cri.go:89] found id: ""
	I1006 14:29:50.769672  656123 logs.go:282] 0 containers: []
	W1006 14:29:50.769679  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:29:50.769684  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:29:50.769740  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:29:50.797585  656123 cri.go:89] found id: ""
	I1006 14:29:50.797603  656123 logs.go:282] 0 containers: []
	W1006 14:29:50.797611  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:29:50.797620  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:29:50.797631  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:29:50.811635  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:29:50.811664  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:29:50.870641  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:29:50.863296    8314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:50.863835    8314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:50.865405    8314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:50.865832    8314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:50.866946    8314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:29:50.863296    8314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:50.863835    8314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:50.865405    8314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:50.865832    8314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:50.866946    8314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 14:29:50.870652  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:29:50.870665  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:29:50.933617  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:29:50.933644  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:29:50.964985  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:29:50.965003  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:29:53.535109  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:29:53.545986  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:29:53.546039  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:29:53.571300  656123 cri.go:89] found id: ""
	I1006 14:29:53.571315  656123 logs.go:282] 0 containers: []
	W1006 14:29:53.571322  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:29:53.571328  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:29:53.571373  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:29:53.597111  656123 cri.go:89] found id: ""
	I1006 14:29:53.597126  656123 logs.go:282] 0 containers: []
	W1006 14:29:53.597132  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:29:53.597137  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:29:53.597188  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:29:53.621477  656123 cri.go:89] found id: ""
	I1006 14:29:53.621493  656123 logs.go:282] 0 containers: []
	W1006 14:29:53.621500  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:29:53.621504  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:29:53.621550  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:29:53.647877  656123 cri.go:89] found id: ""
	I1006 14:29:53.647891  656123 logs.go:282] 0 containers: []
	W1006 14:29:53.647898  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:29:53.647902  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:29:53.647947  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:29:53.673269  656123 cri.go:89] found id: ""
	I1006 14:29:53.673284  656123 logs.go:282] 0 containers: []
	W1006 14:29:53.673291  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:29:53.673296  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:29:53.673356  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:29:53.698368  656123 cri.go:89] found id: ""
	I1006 14:29:53.698384  656123 logs.go:282] 0 containers: []
	W1006 14:29:53.698390  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:29:53.698395  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:29:53.698446  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:29:53.724452  656123 cri.go:89] found id: ""
	I1006 14:29:53.724471  656123 logs.go:282] 0 containers: []
	W1006 14:29:53.724481  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:29:53.724491  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:29:53.724507  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:29:53.790937  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:29:53.790959  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:29:53.804913  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:29:53.804929  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:29:53.862094  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:29:53.854344    8433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:53.854872    8433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:53.856476    8433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:53.856953    8433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:53.858577    8433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:29:53.854344    8433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:53.854872    8433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:53.856476    8433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:53.856953    8433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:53.858577    8433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 14:29:53.862111  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:29:53.862124  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:29:53.921847  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:29:53.921867  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
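One detail worth noting: the order of the "Gathering logs for ..." steps rotates from cycle to cycle (kubelet, dmesg, describe nodes, CRI-O, and container status come up in a different sequence each time). That is consistent with the log sources being held in a Go map, whose iteration order is randomized by the runtime; this is an inference from the log pattern, not confirmed from the source. A minimal demonstration of the effect, with the source names taken from the log and the map itself illustrative:

	// Ranging over the same Go map can yield a different order on each pass.
	package main

	import "fmt"

	func main() {
		sources := map[string]string{
			"kubelet":          "journalctl -u kubelet -n 400",
			"dmesg":            "dmesg --level warn,err,crit,alert,emerg",
			"describe nodes":   "kubectl describe nodes",
			"CRI-O":            "journalctl -u crio -n 400",
			"container status": "crictl ps -a",
		}
		for cycle := 1; cycle <= 3; cycle++ {
			fmt.Println("cycle", cycle)
			for name := range sources { // iteration order is randomized
				fmt.Println("  gathering logs for", name, "...")
			}
		}
	}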
	I1006 14:29:56.452775  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:29:56.464702  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:29:56.464760  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:29:56.491587  656123 cri.go:89] found id: ""
	I1006 14:29:56.491603  656123 logs.go:282] 0 containers: []
	W1006 14:29:56.491609  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:29:56.491614  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:29:56.491662  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:29:56.517138  656123 cri.go:89] found id: ""
	I1006 14:29:56.517157  656123 logs.go:282] 0 containers: []
	W1006 14:29:56.517166  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:29:56.517170  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:29:56.517243  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:29:56.542713  656123 cri.go:89] found id: ""
	I1006 14:29:56.542728  656123 logs.go:282] 0 containers: []
	W1006 14:29:56.542735  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:29:56.542740  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:29:56.542787  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:29:56.568528  656123 cri.go:89] found id: ""
	I1006 14:29:56.568545  656123 logs.go:282] 0 containers: []
	W1006 14:29:56.568554  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:29:56.568561  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:29:56.568616  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:29:56.593881  656123 cri.go:89] found id: ""
	I1006 14:29:56.593897  656123 logs.go:282] 0 containers: []
	W1006 14:29:56.593904  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:29:56.593909  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:29:56.593957  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:29:56.618843  656123 cri.go:89] found id: ""
	I1006 14:29:56.618862  656123 logs.go:282] 0 containers: []
	W1006 14:29:56.618869  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:29:56.618874  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:29:56.618931  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:29:56.644219  656123 cri.go:89] found id: ""
	I1006 14:29:56.644239  656123 logs.go:282] 0 containers: []
	W1006 14:29:56.644249  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:29:56.644258  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:29:56.644270  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:29:56.701345  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:29:56.693737    8555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:56.694299    8555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:56.695864    8555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:56.696432    8555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:56.697961    8555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:29:56.693737    8555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:56.694299    8555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:56.695864    8555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:56.696432    8555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:56.697961    8555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 14:29:56.701372  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:29:56.701384  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:29:56.762071  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:29:56.762096  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:29:56.791634  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:29:56.791656  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:29:56.857469  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:29:56.857492  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:29:59.371748  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:29:59.383943  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:29:59.384004  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:29:59.411674  656123 cri.go:89] found id: ""
	I1006 14:29:59.411695  656123 logs.go:282] 0 containers: []
	W1006 14:29:59.411703  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:29:59.411712  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:29:59.411829  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:29:59.438177  656123 cri.go:89] found id: ""
	I1006 14:29:59.438193  656123 logs.go:282] 0 containers: []
	W1006 14:29:59.438200  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:29:59.438217  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:29:59.438276  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:29:59.467581  656123 cri.go:89] found id: ""
	I1006 14:29:59.467601  656123 logs.go:282] 0 containers: []
	W1006 14:29:59.467611  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:29:59.467619  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:29:59.467682  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:29:59.496610  656123 cri.go:89] found id: ""
	I1006 14:29:59.496626  656123 logs.go:282] 0 containers: []
	W1006 14:29:59.496633  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:29:59.496638  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:29:59.496684  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:29:59.523799  656123 cri.go:89] found id: ""
	I1006 14:29:59.523815  656123 logs.go:282] 0 containers: []
	W1006 14:29:59.523822  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:29:59.523827  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:29:59.523889  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:29:59.550529  656123 cri.go:89] found id: ""
	I1006 14:29:59.550546  656123 logs.go:282] 0 containers: []
	W1006 14:29:59.550553  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:29:59.550558  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:29:59.550606  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:29:59.577487  656123 cri.go:89] found id: ""
	I1006 14:29:59.577503  656123 logs.go:282] 0 containers: []
	W1006 14:29:59.577509  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:29:59.577518  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:29:59.577529  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:29:59.607238  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:29:59.607260  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:29:59.676960  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:29:59.676986  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:29:59.690846  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:29:59.690869  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:29:59.749311  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:29:59.741475    8704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:59.742053    8704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:59.743670    8704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:59.744122    8704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:59.745515    8704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:29:59.741475    8704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:59.742053    8704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:59.743670    8704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:59.744122    8704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:59.745515    8704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 14:29:59.749329  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:29:59.749339  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:30:02.310264  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:30:02.321519  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:30:02.321570  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:30:02.347821  656123 cri.go:89] found id: ""
	I1006 14:30:02.347842  656123 logs.go:282] 0 containers: []
	W1006 14:30:02.347852  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:30:02.347860  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:30:02.347920  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:30:02.373381  656123 cri.go:89] found id: ""
	I1006 14:30:02.373404  656123 logs.go:282] 0 containers: []
	W1006 14:30:02.373412  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:30:02.373418  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:30:02.373462  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:30:02.401169  656123 cri.go:89] found id: ""
	I1006 14:30:02.401189  656123 logs.go:282] 0 containers: []
	W1006 14:30:02.401199  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:30:02.401215  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:30:02.401271  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:30:02.427774  656123 cri.go:89] found id: ""
	I1006 14:30:02.427790  656123 logs.go:282] 0 containers: []
	W1006 14:30:02.427799  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:30:02.427806  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:30:02.427858  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:30:02.453624  656123 cri.go:89] found id: ""
	I1006 14:30:02.453642  656123 logs.go:282] 0 containers: []
	W1006 14:30:02.453652  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:30:02.453659  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:30:02.453725  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:30:02.480503  656123 cri.go:89] found id: ""
	I1006 14:30:02.480520  656123 logs.go:282] 0 containers: []
	W1006 14:30:02.480526  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:30:02.480531  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:30:02.480581  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:30:02.506624  656123 cri.go:89] found id: ""
	I1006 14:30:02.506643  656123 logs.go:282] 0 containers: []
	W1006 14:30:02.506652  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:30:02.506662  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:30:02.506675  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:30:02.575030  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:30:02.575055  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:30:02.589240  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:30:02.589266  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:30:02.647840  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:30:02.640193    8804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:02.640759    8804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:02.642327    8804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:02.642757    8804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:02.644424    8804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:30:02.640193    8804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:02.640759    8804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:02.642327    8804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:02.642757    8804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:02.644424    8804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 14:30:02.647855  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:30:02.647866  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:30:02.710907  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:30:02.710932  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:30:05.243556  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:30:05.254230  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:30:05.254287  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:30:05.279490  656123 cri.go:89] found id: ""
	I1006 14:30:05.279506  656123 logs.go:282] 0 containers: []
	W1006 14:30:05.279514  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:30:05.279520  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:30:05.279572  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:30:05.305513  656123 cri.go:89] found id: ""
	I1006 14:30:05.305533  656123 logs.go:282] 0 containers: []
	W1006 14:30:05.305539  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:30:05.305544  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:30:05.305591  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:30:05.331962  656123 cri.go:89] found id: ""
	I1006 14:30:05.331981  656123 logs.go:282] 0 containers: []
	W1006 14:30:05.331990  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:30:05.331996  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:30:05.332058  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:30:05.357789  656123 cri.go:89] found id: ""
	I1006 14:30:05.357807  656123 logs.go:282] 0 containers: []
	W1006 14:30:05.357815  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:30:05.357820  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:30:05.357866  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:30:05.383637  656123 cri.go:89] found id: ""
	I1006 14:30:05.383658  656123 logs.go:282] 0 containers: []
	W1006 14:30:05.383664  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:30:05.383669  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:30:05.383715  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:30:05.408314  656123 cri.go:89] found id: ""
	I1006 14:30:05.408332  656123 logs.go:282] 0 containers: []
	W1006 14:30:05.408341  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:30:05.408348  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:30:05.408418  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:30:05.433843  656123 cri.go:89] found id: ""
	I1006 14:30:05.433861  656123 logs.go:282] 0 containers: []
	W1006 14:30:05.433867  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:30:05.433876  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:30:05.433888  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:30:05.494147  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:30:05.494176  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:30:05.523997  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:30:05.524016  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:30:05.591019  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:30:05.591039  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:30:05.604531  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:30:05.604546  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:30:05.660873  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:30:05.653677    8938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:05.654169    8938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:05.655684    8938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:05.656053    8938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:05.657599    8938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
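
The cycle above repeats roughly every three seconds: minikube probes for a running kube-apiserver process, lists CRI containers for each control-plane component, and, finding none, falls back to gathering kubelet, dmesg, describe-nodes, CRI-O, and container-status logs. A minimal Go sketch of that poll-and-collect pattern follows; the runSSH helper, the component list, and the fixed three-second interval are illustrative assumptions, not minikube's actual ssh_runner API.

	// Hypothetical sketch of the poll-and-collect loop seen above; runSSH is an
	// illustrative stand-in for minikube's ssh_runner, executing locally instead.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	func runSSH(cmd string) (string, error) {
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		return string(out), err
	}

	func main() {
		components := []string{"kube-apiserver", "etcd", "coredns",
			"kube-scheduler", "kube-proxy", "kube-controller-manager", "kindnet"}
		for {
			// pgrep exits non-zero when no apiserver process matches.
			if _, err := runSSH("sudo pgrep -xnf kube-apiserver.*minikube.*"); err == nil {
				fmt.Println("apiserver process found")
				return
			}
			for _, c := range components {
				// --quiet prints only container IDs; empty output means 0 containers.
				out, _ := runSSH("sudo crictl ps -a --quiet --name=" + c)
				if strings.TrimSpace(out) == "" {
					fmt.Printf("no container was found matching %q\n", c)
				}
			}
			time.Sleep(3 * time.Second) // interval observed in the log timestamps
		}
	}
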
	I1006 14:30:08.162635  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:30:08.173492  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:30:08.173538  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:30:08.199879  656123 cri.go:89] found id: ""
	I1006 14:30:08.199896  656123 logs.go:282] 0 containers: []
	W1006 14:30:08.199902  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:30:08.199907  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:30:08.199954  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:30:08.225501  656123 cri.go:89] found id: ""
	I1006 14:30:08.225520  656123 logs.go:282] 0 containers: []
	W1006 14:30:08.225531  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:30:08.225537  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:30:08.225598  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:30:08.251711  656123 cri.go:89] found id: ""
	I1006 14:30:08.251730  656123 logs.go:282] 0 containers: []
	W1006 14:30:08.251737  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:30:08.251742  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:30:08.251790  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:30:08.277559  656123 cri.go:89] found id: ""
	I1006 14:30:08.277575  656123 logs.go:282] 0 containers: []
	W1006 14:30:08.277584  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:30:08.277594  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:30:08.277656  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:30:08.303749  656123 cri.go:89] found id: ""
	I1006 14:30:08.303767  656123 logs.go:282] 0 containers: []
	W1006 14:30:08.303776  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:30:08.303781  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:30:08.303830  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:30:08.329034  656123 cri.go:89] found id: ""
	I1006 14:30:08.329053  656123 logs.go:282] 0 containers: []
	W1006 14:30:08.329059  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:30:08.329064  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:30:08.329111  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:30:08.354393  656123 cri.go:89] found id: ""
	I1006 14:30:08.354409  656123 logs.go:282] 0 containers: []
	W1006 14:30:08.354416  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:30:08.354423  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:30:08.354434  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:30:08.416780  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:30:08.416799  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:30:08.444904  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:30:08.444925  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:30:08.518089  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:30:08.518111  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:30:08.531108  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:30:08.531124  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:30:08.586529  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:30:08.578762    9065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:08.579607    9065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:08.581199    9065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:08.581663    9065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:08.583179    9065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
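
Every describe-nodes attempt dies at the TCP layer: "dial tcp [::1]:8441: connect: connection refused" means nothing is listening on the apiserver port at all, as opposed to an apiserver that is up but failing health checks. A quick probe that reproduces the distinction, using only Go's standard net package (the port comes from the log; the timeout and everything else are illustrative):

	// Probe the apiserver port to distinguish "connection refused" (nothing
	// listening) from a timeout (listening but unresponsive, or filtered).
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
		if err != nil {
			// With the apiserver down this prints a "connect: connection refused"
			// error, matching the kubectl output in the log above.
			fmt.Println("dial failed:", err)
			return
		}
		conn.Close()
		fmt.Println("something is listening on 8441")
	}
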
	I1006 14:30:11.087318  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:30:11.098631  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:30:11.098701  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:30:11.125423  656123 cri.go:89] found id: ""
	I1006 14:30:11.125441  656123 logs.go:282] 0 containers: []
	W1006 14:30:11.125450  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:30:11.125456  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:30:11.125520  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:30:11.154785  656123 cri.go:89] found id: ""
	I1006 14:30:11.154803  656123 logs.go:282] 0 containers: []
	W1006 14:30:11.154810  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:30:11.154815  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:30:11.154868  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:30:11.180879  656123 cri.go:89] found id: ""
	I1006 14:30:11.180899  656123 logs.go:282] 0 containers: []
	W1006 14:30:11.180908  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:30:11.180915  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:30:11.180979  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:30:11.207281  656123 cri.go:89] found id: ""
	I1006 14:30:11.207308  656123 logs.go:282] 0 containers: []
	W1006 14:30:11.207318  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:30:11.207326  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:30:11.207391  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:30:11.234275  656123 cri.go:89] found id: ""
	I1006 14:30:11.234293  656123 logs.go:282] 0 containers: []
	W1006 14:30:11.234302  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:30:11.234308  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:30:11.234379  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:30:11.261486  656123 cri.go:89] found id: ""
	I1006 14:30:11.261502  656123 logs.go:282] 0 containers: []
	W1006 14:30:11.261508  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:30:11.261514  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:30:11.261561  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:30:11.287155  656123 cri.go:89] found id: ""
	I1006 14:30:11.287173  656123 logs.go:282] 0 containers: []
	W1006 14:30:11.287180  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:30:11.287189  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:30:11.287223  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:30:11.358359  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:30:11.358383  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:30:11.372359  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:30:11.372385  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:30:11.430998  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:30:11.423269    9166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:11.423805    9166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:11.425394    9166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:11.425911    9166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:11.427479    9166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1006 14:30:11.431012  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:30:11.431023  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:30:11.498514  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:30:11.498538  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:30:14.030847  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:30:14.041715  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:30:14.041763  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:30:14.067907  656123 cri.go:89] found id: ""
	I1006 14:30:14.067927  656123 logs.go:282] 0 containers: []
	W1006 14:30:14.067938  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:30:14.067944  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:30:14.067992  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:30:14.093781  656123 cri.go:89] found id: ""
	I1006 14:30:14.093800  656123 logs.go:282] 0 containers: []
	W1006 14:30:14.093810  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:30:14.093817  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:30:14.093873  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:30:14.120737  656123 cri.go:89] found id: ""
	I1006 14:30:14.120752  656123 logs.go:282] 0 containers: []
	W1006 14:30:14.120759  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:30:14.120765  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:30:14.120825  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:30:14.148551  656123 cri.go:89] found id: ""
	I1006 14:30:14.148567  656123 logs.go:282] 0 containers: []
	W1006 14:30:14.148575  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:30:14.148580  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:30:14.148632  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:30:14.174943  656123 cri.go:89] found id: ""
	I1006 14:30:14.174960  656123 logs.go:282] 0 containers: []
	W1006 14:30:14.174965  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:30:14.174970  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:30:14.175032  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:30:14.201148  656123 cri.go:89] found id: ""
	I1006 14:30:14.201163  656123 logs.go:282] 0 containers: []
	W1006 14:30:14.201172  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:30:14.201178  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:30:14.201245  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:30:14.228046  656123 cri.go:89] found id: ""
	I1006 14:30:14.228062  656123 logs.go:282] 0 containers: []
	W1006 14:30:14.228068  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:30:14.228077  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:30:14.228087  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:30:14.300889  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:30:14.300914  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:30:14.314304  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:30:14.314326  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:30:14.370818  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:30:14.363282    9300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:14.363836    9300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:14.365383    9300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:14.365793    9300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:14.367329    9300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1006 14:30:14.370827  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:30:14.370838  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:30:14.431681  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:30:14.431704  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:30:16.961397  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:30:16.973165  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:30:16.973247  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:30:17.001273  656123 cri.go:89] found id: ""
	I1006 14:30:17.001291  656123 logs.go:282] 0 containers: []
	W1006 14:30:17.001297  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:30:17.001302  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:30:17.001354  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:30:17.027536  656123 cri.go:89] found id: ""
	I1006 14:30:17.027557  656123 logs.go:282] 0 containers: []
	W1006 14:30:17.027565  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:30:17.027570  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:30:17.027622  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:30:17.054924  656123 cri.go:89] found id: ""
	I1006 14:30:17.054940  656123 logs.go:282] 0 containers: []
	W1006 14:30:17.054947  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:30:17.054953  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:30:17.055000  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:30:17.083443  656123 cri.go:89] found id: ""
	I1006 14:30:17.083460  656123 logs.go:282] 0 containers: []
	W1006 14:30:17.083467  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:30:17.083472  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:30:17.083522  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:30:17.111442  656123 cri.go:89] found id: ""
	I1006 14:30:17.111459  656123 logs.go:282] 0 containers: []
	W1006 14:30:17.111467  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:30:17.111474  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:30:17.111530  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:30:17.138310  656123 cri.go:89] found id: ""
	I1006 14:30:17.138329  656123 logs.go:282] 0 containers: []
	W1006 14:30:17.138338  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:30:17.138344  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:30:17.138393  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:30:17.166360  656123 cri.go:89] found id: ""
	I1006 14:30:17.166389  656123 logs.go:282] 0 containers: []
	W1006 14:30:17.166400  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:30:17.166411  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:30:17.166427  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:30:17.238488  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:30:17.238516  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:30:17.252654  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:30:17.252688  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:30:17.312602  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:30:17.304484    9418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:17.305059    9418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:17.306672    9418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:17.307166    9418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:17.308768    9418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1006 14:30:17.312623  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:30:17.312634  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:30:17.375185  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:30:17.375222  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:30:19.907611  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:30:19.918724  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:30:19.918776  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:30:19.945244  656123 cri.go:89] found id: ""
	I1006 14:30:19.945264  656123 logs.go:282] 0 containers: []
	W1006 14:30:19.945277  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:30:19.945285  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:30:19.945343  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:30:19.972919  656123 cri.go:89] found id: ""
	I1006 14:30:19.972939  656123 logs.go:282] 0 containers: []
	W1006 14:30:19.972949  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:30:19.972955  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:30:19.973008  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:30:19.999841  656123 cri.go:89] found id: ""
	I1006 14:30:19.999858  656123 logs.go:282] 0 containers: []
	W1006 14:30:19.999864  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:30:19.999870  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:30:19.999926  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:30:20.027271  656123 cri.go:89] found id: ""
	I1006 14:30:20.027290  656123 logs.go:282] 0 containers: []
	W1006 14:30:20.027299  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:30:20.027306  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:30:20.027364  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:30:20.054297  656123 cri.go:89] found id: ""
	I1006 14:30:20.054313  656123 logs.go:282] 0 containers: []
	W1006 14:30:20.054320  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:30:20.054325  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:30:20.054380  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:30:20.081354  656123 cri.go:89] found id: ""
	I1006 14:30:20.081374  656123 logs.go:282] 0 containers: []
	W1006 14:30:20.081380  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:30:20.081386  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:30:20.081438  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:30:20.108256  656123 cri.go:89] found id: ""
	I1006 14:30:20.108273  656123 logs.go:282] 0 containers: []
	W1006 14:30:20.108280  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:30:20.108289  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:30:20.108303  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:30:20.177476  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:30:20.177501  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:30:20.191396  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:30:20.191419  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:30:20.250424  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:30:20.242535    9540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:20.243129    9540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:20.244697    9540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:20.245110    9540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:20.246705    9540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1006 14:30:20.250437  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:30:20.250448  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:30:20.311404  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:30:20.311430  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:30:22.842482  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:30:22.854386  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:30:22.854451  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:30:22.882144  656123 cri.go:89] found id: ""
	I1006 14:30:22.882160  656123 logs.go:282] 0 containers: []
	W1006 14:30:22.882167  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:30:22.882176  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:30:22.882244  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:30:22.908078  656123 cri.go:89] found id: ""
	I1006 14:30:22.908097  656123 logs.go:282] 0 containers: []
	W1006 14:30:22.908106  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:30:22.908112  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:30:22.908163  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:30:22.934596  656123 cri.go:89] found id: ""
	I1006 14:30:22.934613  656123 logs.go:282] 0 containers: []
	W1006 14:30:22.934620  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:30:22.934624  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:30:22.934673  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:30:22.961803  656123 cri.go:89] found id: ""
	I1006 14:30:22.961821  656123 logs.go:282] 0 containers: []
	W1006 14:30:22.961830  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:30:22.961837  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:30:22.961889  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:30:22.988277  656123 cri.go:89] found id: ""
	I1006 14:30:22.988293  656123 logs.go:282] 0 containers: []
	W1006 14:30:22.988300  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:30:22.988305  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:30:22.988355  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:30:23.015411  656123 cri.go:89] found id: ""
	I1006 14:30:23.015428  656123 logs.go:282] 0 containers: []
	W1006 14:30:23.015436  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:30:23.015441  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:30:23.015494  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:30:23.042508  656123 cri.go:89] found id: ""
	I1006 14:30:23.042526  656123 logs.go:282] 0 containers: []
	W1006 14:30:23.042534  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:30:23.042545  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:30:23.042558  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:30:23.110932  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:30:23.110957  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:30:23.125294  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:30:23.125322  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:30:23.185388  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:30:23.177268    9660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:23.177825    9660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:23.179508    9660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:23.179961    9660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:23.181496    9660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1006 14:30:23.185405  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:30:23.185418  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:30:23.246673  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:30:23.246696  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:30:25.778383  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:30:25.789490  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:30:25.789539  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:30:25.816713  656123 cri.go:89] found id: ""
	I1006 14:30:25.816731  656123 logs.go:282] 0 containers: []
	W1006 14:30:25.816737  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:30:25.816742  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:30:25.816792  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:30:25.844676  656123 cri.go:89] found id: ""
	I1006 14:30:25.844699  656123 logs.go:282] 0 containers: []
	W1006 14:30:25.844708  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:30:25.844716  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:30:25.844784  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:30:25.872027  656123 cri.go:89] found id: ""
	I1006 14:30:25.872046  656123 logs.go:282] 0 containers: []
	W1006 14:30:25.872054  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:30:25.872059  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:30:25.872115  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:30:25.898454  656123 cri.go:89] found id: ""
	I1006 14:30:25.898473  656123 logs.go:282] 0 containers: []
	W1006 14:30:25.898480  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:30:25.898486  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:30:25.898548  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:30:25.926559  656123 cri.go:89] found id: ""
	I1006 14:30:25.926576  656123 logs.go:282] 0 containers: []
	W1006 14:30:25.926583  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:30:25.926589  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:30:25.926638  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:30:25.953516  656123 cri.go:89] found id: ""
	I1006 14:30:25.953535  656123 logs.go:282] 0 containers: []
	W1006 14:30:25.953544  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:30:25.953562  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:30:25.953634  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:30:25.980962  656123 cri.go:89] found id: ""
	I1006 14:30:25.980978  656123 logs.go:282] 0 containers: []
	W1006 14:30:25.980986  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:30:25.980994  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:30:25.981012  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:30:26.052486  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:30:26.052510  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:30:26.066688  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:30:26.066710  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:30:26.126899  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:30:26.118941    9785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:26.119633    9785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:26.121265    9785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:26.121767    9785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:26.123331    9785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1006 14:30:26.126912  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:30:26.126924  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:30:26.187018  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:30:26.187047  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:30:28.721028  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:30:28.732295  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:30:28.732361  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:30:28.759561  656123 cri.go:89] found id: ""
	I1006 14:30:28.759583  656123 logs.go:282] 0 containers: []
	W1006 14:30:28.759592  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:30:28.759598  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:30:28.759651  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:30:28.787553  656123 cri.go:89] found id: ""
	I1006 14:30:28.787573  656123 logs.go:282] 0 containers: []
	W1006 14:30:28.787584  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:30:28.787598  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:30:28.787653  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:30:28.813499  656123 cri.go:89] found id: ""
	I1006 14:30:28.813520  656123 logs.go:282] 0 containers: []
	W1006 14:30:28.813529  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:30:28.813535  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:30:28.813591  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:30:28.840441  656123 cri.go:89] found id: ""
	I1006 14:30:28.840462  656123 logs.go:282] 0 containers: []
	W1006 14:30:28.840468  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:30:28.840474  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:30:28.840523  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:30:28.867632  656123 cri.go:89] found id: ""
	I1006 14:30:28.867647  656123 logs.go:282] 0 containers: []
	W1006 14:30:28.867654  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:30:28.867659  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:30:28.867709  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:30:28.895005  656123 cri.go:89] found id: ""
	I1006 14:30:28.895023  656123 logs.go:282] 0 containers: []
	W1006 14:30:28.895029  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:30:28.895034  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:30:28.895082  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:30:28.920965  656123 cri.go:89] found id: ""
	I1006 14:30:28.920983  656123 logs.go:282] 0 containers: []
	W1006 14:30:28.920993  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:30:28.921003  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:30:28.921017  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:30:28.981278  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:30:28.981302  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:30:29.010983  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:30:29.011000  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:30:29.078541  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:30:29.078565  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:30:29.092586  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:30:29.092613  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:30:29.151129  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:30:29.143937    9927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:29.144542    9927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:29.146112    9927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:29.146650    9927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:29.147708    9927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
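
The "Process exited with status 1" line wrapping each failure is the non-zero kubectl exit status surfacing through Go's os/exec. A generic sketch of how such a status is typically extracted (the kubectl invocation is copied from the log; the error handling is an illustration, not minikube's actual code):

	// Extracting the exit status that appears as "Process exited with status 1".
	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("/bin/bash", "-c",
			`sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig`)
		out, err := cmd.CombinedOutput()
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			fmt.Printf("Process exited with status %d\n", exitErr.ExitCode())
		}
		fmt.Print(string(out))
	}
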
	I1006 14:30:31.652214  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:30:31.663823  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:30:31.663891  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:30:31.690576  656123 cri.go:89] found id: ""
	I1006 14:30:31.690596  656123 logs.go:282] 0 containers: []
	W1006 14:30:31.690606  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:30:31.690613  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:30:31.690666  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:30:31.716874  656123 cri.go:89] found id: ""
	I1006 14:30:31.716894  656123 logs.go:282] 0 containers: []
	W1006 14:30:31.716902  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:30:31.716907  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:30:31.716956  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:30:31.744572  656123 cri.go:89] found id: ""
	I1006 14:30:31.744594  656123 logs.go:282] 0 containers: []
	W1006 14:30:31.744603  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:30:31.744611  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:30:31.744681  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:30:31.771539  656123 cri.go:89] found id: ""
	I1006 14:30:31.771556  656123 logs.go:282] 0 containers: []
	W1006 14:30:31.771565  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:30:31.771575  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:30:31.771637  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:30:31.798102  656123 cri.go:89] found id: ""
	I1006 14:30:31.798118  656123 logs.go:282] 0 containers: []
	W1006 14:30:31.798125  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:30:31.798131  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:30:31.798175  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:30:31.825905  656123 cri.go:89] found id: ""
	I1006 14:30:31.825921  656123 logs.go:282] 0 containers: []
	W1006 14:30:31.825928  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:30:31.825933  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:30:31.825985  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:30:31.853474  656123 cri.go:89] found id: ""
	I1006 14:30:31.853489  656123 logs.go:282] 0 containers: []
	W1006 14:30:31.853496  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:30:31.853504  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:30:31.853515  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:30:31.925541  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:30:31.925566  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:30:31.939650  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:30:31.939676  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:30:31.998586  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:30:31.990853   10031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:31.991461   10031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:31.992961   10031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:31.993424   10031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:31.994933   10031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1006 14:30:31.998595  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:30:31.998606  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:30:32.058322  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:30:32.058348  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
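	Each iteration of the wait loop follows the same diagnostic pass: probe for the apiserver process, enumerate every expected control-plane container in CRI-O, and, having found none, gather kubelet, dmesg, describe-nodes, CRI-O, and container-status logs. Condensed into a shell sketch, with the commands copied verbatim from the log and the loop wrapper added only for readability:

	    #!/usr/bin/env bash
	    # One diagnostic pass as logged above; run inside the minikube node.
	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	                kube-controller-manager kindnet; do
	      ids=$(sudo crictl ps -a --quiet --name="$name")
	      [ -z "$ids" ] && echo "No container was found matching \"$name\""
	    done
	    sudo journalctl -u kubelet -n 400                       # kubelet logs
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes \
	         --kubeconfig=/var/lib/minikube/kubeconfig          # fails while apiserver is down
	    sudo journalctl -u crio -n 400                          # CRI-O logs
	    sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a

	Note the fallback on the last line: if crictl is missing or fails, container status is taken from docker ps -a instead.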
	I1006 14:30:34.591129  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:30:34.602495  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:30:34.602545  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:30:34.628973  656123 cri.go:89] found id: ""
	I1006 14:30:34.628991  656123 logs.go:282] 0 containers: []
	W1006 14:30:34.628998  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:30:34.629003  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:30:34.629048  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:30:34.654917  656123 cri.go:89] found id: ""
	I1006 14:30:34.654934  656123 logs.go:282] 0 containers: []
	W1006 14:30:34.654941  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:30:34.654945  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:30:34.654997  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:30:34.680385  656123 cri.go:89] found id: ""
	I1006 14:30:34.680401  656123 logs.go:282] 0 containers: []
	W1006 14:30:34.680408  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:30:34.680413  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:30:34.680459  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:30:34.705914  656123 cri.go:89] found id: ""
	I1006 14:30:34.705929  656123 logs.go:282] 0 containers: []
	W1006 14:30:34.705935  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:30:34.705940  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:30:34.705989  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:30:34.731580  656123 cri.go:89] found id: ""
	I1006 14:30:34.731597  656123 logs.go:282] 0 containers: []
	W1006 14:30:34.731604  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:30:34.731609  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:30:34.731661  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:30:34.756200  656123 cri.go:89] found id: ""
	I1006 14:30:34.756232  656123 logs.go:282] 0 containers: []
	W1006 14:30:34.756239  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:30:34.756244  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:30:34.756293  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:30:34.781770  656123 cri.go:89] found id: ""
	I1006 14:30:34.781785  656123 logs.go:282] 0 containers: []
	W1006 14:30:34.781794  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:30:34.781802  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:30:34.781813  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:30:34.850861  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:30:34.850884  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:30:34.864688  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:30:34.864706  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:30:34.921713  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:30:34.914358   10154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:34.914917   10154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:34.916495   10154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:34.916918   10154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:34.918459   10154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1006 14:30:34.921723  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:30:34.921733  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:30:34.985884  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:30:34.985906  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:30:37.516053  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:30:37.526705  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:30:37.526751  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:30:37.551472  656123 cri.go:89] found id: ""
	I1006 14:30:37.551490  656123 logs.go:282] 0 containers: []
	W1006 14:30:37.551500  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:30:37.551507  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:30:37.551561  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:30:37.576603  656123 cri.go:89] found id: ""
	I1006 14:30:37.576619  656123 logs.go:282] 0 containers: []
	W1006 14:30:37.576626  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:30:37.576630  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:30:37.576674  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:30:37.602217  656123 cri.go:89] found id: ""
	I1006 14:30:37.602241  656123 logs.go:282] 0 containers: []
	W1006 14:30:37.602250  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:30:37.602254  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:30:37.602300  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:30:37.627547  656123 cri.go:89] found id: ""
	I1006 14:30:37.627561  656123 logs.go:282] 0 containers: []
	W1006 14:30:37.627567  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:30:37.627572  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:30:37.627614  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:30:37.652434  656123 cri.go:89] found id: ""
	I1006 14:30:37.652451  656123 logs.go:282] 0 containers: []
	W1006 14:30:37.652460  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:30:37.652467  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:30:37.652519  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:30:37.677543  656123 cri.go:89] found id: ""
	I1006 14:30:37.677558  656123 logs.go:282] 0 containers: []
	W1006 14:30:37.677564  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:30:37.677569  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:30:37.677611  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:30:37.701695  656123 cri.go:89] found id: ""
	I1006 14:30:37.701711  656123 logs.go:282] 0 containers: []
	W1006 14:30:37.701718  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:30:37.701727  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:30:37.701737  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:30:37.730832  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:30:37.730852  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:30:37.799686  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:30:37.799708  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:30:37.813081  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:30:37.813106  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:30:37.869274  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:30:37.861812   10287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:37.862406   10287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:37.863958   10287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:37.864398   10287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:37.865877   10287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1006 14:30:37.869285  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:30:37.869297  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:30:40.432488  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:30:40.443779  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:30:40.443830  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:30:40.471502  656123 cri.go:89] found id: ""
	I1006 14:30:40.471520  656123 logs.go:282] 0 containers: []
	W1006 14:30:40.471528  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:30:40.471533  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:30:40.471591  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:30:40.498418  656123 cri.go:89] found id: ""
	I1006 14:30:40.498435  656123 logs.go:282] 0 containers: []
	W1006 14:30:40.498442  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:30:40.498447  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:30:40.498495  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:30:40.525987  656123 cri.go:89] found id: ""
	I1006 14:30:40.526003  656123 logs.go:282] 0 containers: []
	W1006 14:30:40.526009  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:30:40.526015  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:30:40.526073  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:30:40.554161  656123 cri.go:89] found id: ""
	I1006 14:30:40.554180  656123 logs.go:282] 0 containers: []
	W1006 14:30:40.554190  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:30:40.554197  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:30:40.554262  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:30:40.581168  656123 cri.go:89] found id: ""
	I1006 14:30:40.581186  656123 logs.go:282] 0 containers: []
	W1006 14:30:40.581193  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:30:40.581198  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:30:40.581272  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:30:40.608862  656123 cri.go:89] found id: ""
	I1006 14:30:40.608879  656123 logs.go:282] 0 containers: []
	W1006 14:30:40.608890  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:30:40.608899  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:30:40.608951  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:30:40.636053  656123 cri.go:89] found id: ""
	I1006 14:30:40.636069  656123 logs.go:282] 0 containers: []
	W1006 14:30:40.636076  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:30:40.636084  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:30:40.636096  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:30:40.649832  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:30:40.649854  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:30:40.708143  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:30:40.700302   10406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:40.700800   10406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:40.702328   10406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:40.702794   10406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:40.704437   10406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1006 14:30:40.708157  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:30:40.708173  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:30:40.767571  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:30:40.767598  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:30:40.798425  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:30:40.798447  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:30:43.369172  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:30:43.380275  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:30:43.380336  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:30:43.407137  656123 cri.go:89] found id: ""
	I1006 14:30:43.407166  656123 logs.go:282] 0 containers: []
	W1006 14:30:43.407172  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:30:43.407178  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:30:43.407255  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:30:43.434264  656123 cri.go:89] found id: ""
	I1006 14:30:43.434280  656123 logs.go:282] 0 containers: []
	W1006 14:30:43.434286  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:30:43.434291  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:30:43.434344  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:30:43.460492  656123 cri.go:89] found id: ""
	I1006 14:30:43.460511  656123 logs.go:282] 0 containers: []
	W1006 14:30:43.460521  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:30:43.460527  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:30:43.460579  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:30:43.486096  656123 cri.go:89] found id: ""
	I1006 14:30:43.486112  656123 logs.go:282] 0 containers: []
	W1006 14:30:43.486118  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:30:43.486123  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:30:43.486180  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:30:43.512166  656123 cri.go:89] found id: ""
	I1006 14:30:43.512182  656123 logs.go:282] 0 containers: []
	W1006 14:30:43.512189  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:30:43.512200  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:30:43.512274  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:30:43.540182  656123 cri.go:89] found id: ""
	I1006 14:30:43.540198  656123 logs.go:282] 0 containers: []
	W1006 14:30:43.540225  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:30:43.540231  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:30:43.540281  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:30:43.566257  656123 cri.go:89] found id: ""
	I1006 14:30:43.566276  656123 logs.go:282] 0 containers: []
	W1006 14:30:43.566283  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:30:43.566291  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:30:43.566301  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:30:43.633282  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:30:43.633308  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:30:43.646525  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:30:43.646547  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:30:43.703245  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:30:43.695412   10527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:43.695958   10527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:43.697564   10527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:43.698089   10527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:43.699634   10527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1006 14:30:43.703258  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:30:43.703271  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:30:43.763009  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:30:43.763030  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:30:46.294610  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:30:46.306608  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:30:46.306657  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:30:46.333990  656123 cri.go:89] found id: ""
	I1006 14:30:46.334010  656123 logs.go:282] 0 containers: []
	W1006 14:30:46.334017  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:30:46.334023  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:30:46.334071  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:30:46.360169  656123 cri.go:89] found id: ""
	I1006 14:30:46.360186  656123 logs.go:282] 0 containers: []
	W1006 14:30:46.360193  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:30:46.360197  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:30:46.360274  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:30:46.386526  656123 cri.go:89] found id: ""
	I1006 14:30:46.386543  656123 logs.go:282] 0 containers: []
	W1006 14:30:46.386552  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:30:46.386559  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:30:46.386618  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:30:46.412732  656123 cri.go:89] found id: ""
	I1006 14:30:46.412755  656123 logs.go:282] 0 containers: []
	W1006 14:30:46.412761  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:30:46.412768  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:30:46.412819  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:30:46.437943  656123 cri.go:89] found id: ""
	I1006 14:30:46.437961  656123 logs.go:282] 0 containers: []
	W1006 14:30:46.437969  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:30:46.437975  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:30:46.438022  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:30:46.462227  656123 cri.go:89] found id: ""
	I1006 14:30:46.462245  656123 logs.go:282] 0 containers: []
	W1006 14:30:46.462254  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:30:46.462259  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:30:46.462308  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:30:46.486426  656123 cri.go:89] found id: ""
	I1006 14:30:46.486446  656123 logs.go:282] 0 containers: []
	W1006 14:30:46.486455  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:30:46.486465  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:30:46.486478  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:30:46.555804  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:30:46.555824  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:30:46.568953  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:30:46.568977  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:30:46.625518  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:30:46.616895   10651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:46.618433   10651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:46.618998   10651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:46.620647   10651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:46.621154   10651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1006 14:30:46.625532  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:30:46.625542  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:30:46.689026  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:30:46.689045  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:30:49.220452  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:30:49.231376  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:30:49.231437  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:30:49.257464  656123 cri.go:89] found id: ""
	I1006 14:30:49.257484  656123 logs.go:282] 0 containers: []
	W1006 14:30:49.257492  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:30:49.257499  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:30:49.257549  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:30:49.282291  656123 cri.go:89] found id: ""
	I1006 14:30:49.282305  656123 logs.go:282] 0 containers: []
	W1006 14:30:49.282315  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:30:49.282322  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:30:49.282374  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:30:49.307787  656123 cri.go:89] found id: ""
	I1006 14:30:49.307806  656123 logs.go:282] 0 containers: []
	W1006 14:30:49.307815  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:30:49.307821  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:30:49.307872  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:30:49.333154  656123 cri.go:89] found id: ""
	I1006 14:30:49.333172  656123 logs.go:282] 0 containers: []
	W1006 14:30:49.333179  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:30:49.333185  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:30:49.333252  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:30:49.359161  656123 cri.go:89] found id: ""
	I1006 14:30:49.359175  656123 logs.go:282] 0 containers: []
	W1006 14:30:49.359183  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:30:49.359188  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:30:49.359253  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:30:49.385380  656123 cri.go:89] found id: ""
	I1006 14:30:49.385398  656123 logs.go:282] 0 containers: []
	W1006 14:30:49.385405  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:30:49.385410  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:30:49.385461  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:30:49.409982  656123 cri.go:89] found id: ""
	I1006 14:30:49.410009  656123 logs.go:282] 0 containers: []
	W1006 14:30:49.410020  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:30:49.410030  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:30:49.410043  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:30:49.470637  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:30:49.470662  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:30:49.498568  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:30:49.498585  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:30:49.568338  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:30:49.568355  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:30:49.581842  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:30:49.581863  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:30:49.638518  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:30:49.631016   10785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:49.631575   10785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:49.633164   10785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:49.633595   10785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:49.635088   10785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
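	The timestamps (14:30:31, :34, :37, ... through :58) show the wait loop retrying roughly every three seconds; only the order in which the log groups are gathered varies between iterations. As a hedged sketch of that cadence, with the probe command verbatim from the log and the loop structure assumed:

	    # Retry until the apiserver process appears, pausing ~3 s between probes.
	    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	      # ...gather the diagnostic pass sketched earlier...
	      sleep 3
	    done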
	I1006 14:30:52.139121  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:30:52.151341  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:30:52.151400  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:30:52.180909  656123 cri.go:89] found id: ""
	I1006 14:30:52.180929  656123 logs.go:282] 0 containers: []
	W1006 14:30:52.180937  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:30:52.180943  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:30:52.181004  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:30:52.212664  656123 cri.go:89] found id: ""
	I1006 14:30:52.212687  656123 logs.go:282] 0 containers: []
	W1006 14:30:52.212695  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:30:52.212700  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:30:52.212753  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:30:52.242804  656123 cri.go:89] found id: ""
	I1006 14:30:52.242824  656123 logs.go:282] 0 containers: []
	W1006 14:30:52.242833  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:30:52.242840  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:30:52.242906  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:30:52.275408  656123 cri.go:89] found id: ""
	I1006 14:30:52.275428  656123 logs.go:282] 0 containers: []
	W1006 14:30:52.275437  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:30:52.275443  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:30:52.275511  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:30:52.304772  656123 cri.go:89] found id: ""
	I1006 14:30:52.304791  656123 logs.go:282] 0 containers: []
	W1006 14:30:52.304797  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:30:52.304802  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:30:52.304855  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:30:52.334628  656123 cri.go:89] found id: ""
	I1006 14:30:52.334646  656123 logs.go:282] 0 containers: []
	W1006 14:30:52.334665  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:30:52.334672  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:30:52.334744  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:30:52.363535  656123 cri.go:89] found id: ""
	I1006 14:30:52.363551  656123 logs.go:282] 0 containers: []
	W1006 14:30:52.363558  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:30:52.363567  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:30:52.363578  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:30:52.395148  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:30:52.395172  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:30:52.467790  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:30:52.467818  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:30:52.483589  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:30:52.483613  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:30:52.547153  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:30:52.538900   10918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:52.539522   10918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:52.541194   10918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:52.541724   10918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:52.543496   10918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1006 14:30:52.547168  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:30:52.547191  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:30:55.111539  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:30:55.123376  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:30:55.123432  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:30:55.151263  656123 cri.go:89] found id: ""
	I1006 14:30:55.151278  656123 logs.go:282] 0 containers: []
	W1006 14:30:55.151285  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:30:55.151289  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:30:55.151354  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:30:55.179099  656123 cri.go:89] found id: ""
	I1006 14:30:55.179116  656123 logs.go:282] 0 containers: []
	W1006 14:30:55.179123  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:30:55.179127  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:30:55.179177  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:30:55.207568  656123 cri.go:89] found id: ""
	I1006 14:30:55.207586  656123 logs.go:282] 0 containers: []
	W1006 14:30:55.207594  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:30:55.207599  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:30:55.207653  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:30:55.236037  656123 cri.go:89] found id: ""
	I1006 14:30:55.236058  656123 logs.go:282] 0 containers: []
	W1006 14:30:55.236068  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:30:55.236075  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:30:55.236132  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:30:55.263286  656123 cri.go:89] found id: ""
	I1006 14:30:55.263304  656123 logs.go:282] 0 containers: []
	W1006 14:30:55.263311  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:30:55.263316  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:30:55.263416  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:30:55.291167  656123 cri.go:89] found id: ""
	I1006 14:30:55.291189  656123 logs.go:282] 0 containers: []
	W1006 14:30:55.291197  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:30:55.291217  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:30:55.291271  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:30:55.318410  656123 cri.go:89] found id: ""
	I1006 14:30:55.318430  656123 logs.go:282] 0 containers: []
	W1006 14:30:55.318440  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:30:55.318450  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:30:55.318461  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:30:55.385160  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:30:55.385187  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:30:55.399050  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:30:55.399076  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:30:55.458418  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:30:55.450518   11025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:55.451123   11025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:55.452726   11025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:55.453351   11025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:55.454908   11025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
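The `describe nodes` probe fails for the same underlying reason as the container scans: kubectl on the node dials the apiserver at localhost:8441 (a non-default port; the stock apiserver port is 8443) and is refused because no kube-apiserver is running. A minimal manual check, assuming curl is present in the node image and using `<profile>` as a placeholder for the profile name:

    # expect 'connection refused' while the apiserver is down
    minikube ssh -p <profile> -- curl -sk https://localhost:8441/healthz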
	I1006 14:30:55.458432  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:30:55.458448  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:30:55.524792  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:30:55.524816  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
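With no component containers to inspect, minikube falls back to general diagnostics: the last 400 journal lines for the kubelet and crio units, warning-and-above kernel messages, a `kubectl describe nodes`, and a container listing. The journal and dmesg commands, exactly as run over SSH:

    sudo journalctl -u kubelet -n 400   # tail of the kubelet unit log
    sudo journalctl -u crio -n 400      # tail of the CRI-O unit log
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400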
	I1006 14:30:58.057888  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:30:58.068966  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:30:58.069020  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:30:58.096398  656123 cri.go:89] found id: ""
	I1006 14:30:58.096415  656123 logs.go:282] 0 containers: []
	W1006 14:30:58.096423  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:30:58.096428  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:30:58.096477  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:30:58.123183  656123 cri.go:89] found id: ""
	I1006 14:30:58.123199  656123 logs.go:282] 0 containers: []
	W1006 14:30:58.123218  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:30:58.123225  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:30:58.123278  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:30:58.149129  656123 cri.go:89] found id: ""
	I1006 14:30:58.149145  656123 logs.go:282] 0 containers: []
	W1006 14:30:58.149152  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:30:58.149156  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:30:58.149231  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:30:58.176154  656123 cri.go:89] found id: ""
	I1006 14:30:58.176171  656123 logs.go:282] 0 containers: []
	W1006 14:30:58.176178  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:30:58.176183  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:30:58.176260  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:30:58.202224  656123 cri.go:89] found id: ""
	I1006 14:30:58.202244  656123 logs.go:282] 0 containers: []
	W1006 14:30:58.202252  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:30:58.202257  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:30:58.202308  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:30:58.228701  656123 cri.go:89] found id: ""
	I1006 14:30:58.228722  656123 logs.go:282] 0 containers: []
	W1006 14:30:58.228731  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:30:58.228738  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:30:58.228789  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:30:58.255405  656123 cri.go:89] found id: ""
	I1006 14:30:58.255424  656123 logs.go:282] 0 containers: []
	W1006 14:30:58.255434  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:30:58.255445  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:30:58.255463  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:30:58.326378  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:30:58.326403  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:30:58.340088  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:30:58.340113  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:30:58.398424  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:30:58.390470   11153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:58.391705   11153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:58.392182   11153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:58.393789   11153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:58.394272   11153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1006 14:30:58.398434  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:30:58.398444  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:30:58.458532  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:30:58.458557  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
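Each wait-loop iteration begins by checking whether a kube-apiserver process exists at all before falling back to the per-component container scan:

    # -f matches against the full command line, -x requires the whole
    # line to match the pattern, -n reports only the newest match
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'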
	I1006 14:31:00.988890  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:31:01.000117  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:31:01.000187  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:31:01.027975  656123 cri.go:89] found id: ""
	I1006 14:31:01.027994  656123 logs.go:282] 0 containers: []
	W1006 14:31:01.028005  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:31:01.028011  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:31:01.028073  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:31:01.057671  656123 cri.go:89] found id: ""
	I1006 14:31:01.057689  656123 logs.go:282] 0 containers: []
	W1006 14:31:01.057695  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:31:01.057703  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:31:01.057753  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:31:01.086296  656123 cri.go:89] found id: ""
	I1006 14:31:01.086312  656123 logs.go:282] 0 containers: []
	W1006 14:31:01.086319  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:31:01.086324  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:31:01.086380  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:31:01.115804  656123 cri.go:89] found id: ""
	I1006 14:31:01.115828  656123 logs.go:282] 0 containers: []
	W1006 14:31:01.115838  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:31:01.115846  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:31:01.115914  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:31:01.143626  656123 cri.go:89] found id: ""
	I1006 14:31:01.143652  656123 logs.go:282] 0 containers: []
	W1006 14:31:01.143662  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:31:01.143669  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:31:01.143730  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:31:01.173329  656123 cri.go:89] found id: ""
	I1006 14:31:01.173351  656123 logs.go:282] 0 containers: []
	W1006 14:31:01.173358  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:31:01.173363  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:31:01.173425  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:31:01.202447  656123 cri.go:89] found id: ""
	I1006 14:31:01.202464  656123 logs.go:282] 0 containers: []
	W1006 14:31:01.202472  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:31:01.202481  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:31:01.202493  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:31:01.264676  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:31:01.255680   11269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:01.256306   11269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:01.258878   11269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:01.259545   11269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:01.261098   11269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1006 14:31:01.264688  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:31:01.264701  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:31:01.325726  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:31:01.325755  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
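The "container status" step just above is runtime-agnostic. `which crictl || echo crictl` resolves to the crictl path when it is installed (and to the bare name otherwise, so any failure message stays legible), and if that invocation fails the shell falls through to the Docker CLI:

    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a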
	I1006 14:31:01.357935  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:31:01.357956  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:31:01.426320  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:31:01.426346  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:31:03.942695  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:31:03.954165  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:31:03.954257  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:31:03.982933  656123 cri.go:89] found id: ""
	I1006 14:31:03.982952  656123 logs.go:282] 0 containers: []
	W1006 14:31:03.982960  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:31:03.982966  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:31:03.983023  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:31:04.010750  656123 cri.go:89] found id: ""
	I1006 14:31:04.010768  656123 logs.go:282] 0 containers: []
	W1006 14:31:04.010775  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:31:04.010780  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:31:04.010845  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:31:04.038408  656123 cri.go:89] found id: ""
	I1006 14:31:04.038430  656123 logs.go:282] 0 containers: []
	W1006 14:31:04.038440  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:31:04.038446  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:31:04.038506  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:31:04.065987  656123 cri.go:89] found id: ""
	I1006 14:31:04.066004  656123 logs.go:282] 0 containers: []
	W1006 14:31:04.066011  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:31:04.066017  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:31:04.066064  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:31:04.092615  656123 cri.go:89] found id: ""
	I1006 14:31:04.092635  656123 logs.go:282] 0 containers: []
	W1006 14:31:04.092645  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:31:04.092651  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:31:04.092715  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:31:04.120296  656123 cri.go:89] found id: ""
	I1006 14:31:04.120314  656123 logs.go:282] 0 containers: []
	W1006 14:31:04.120324  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:31:04.120331  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:31:04.120392  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:31:04.148258  656123 cri.go:89] found id: ""
	I1006 14:31:04.148275  656123 logs.go:282] 0 containers: []
	W1006 14:31:04.148282  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:31:04.148291  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:31:04.148303  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:31:04.162693  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:31:04.162716  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:31:04.222565  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:31:04.214872   11401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:04.215499   11401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:04.216999   11401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:04.217486   11401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:04.218767   11401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1006 14:31:04.222576  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:31:04.222588  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:31:04.284619  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:31:04.284645  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:31:04.315049  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:31:04.315067  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:31:06.880125  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:31:06.891035  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:31:06.891100  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:31:06.919022  656123 cri.go:89] found id: ""
	I1006 14:31:06.919039  656123 logs.go:282] 0 containers: []
	W1006 14:31:06.919054  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:31:06.919059  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:31:06.919109  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:31:06.945007  656123 cri.go:89] found id: ""
	I1006 14:31:06.945023  656123 logs.go:282] 0 containers: []
	W1006 14:31:06.945030  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:31:06.945035  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:31:06.945082  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:31:06.971114  656123 cri.go:89] found id: ""
	I1006 14:31:06.971140  656123 logs.go:282] 0 containers: []
	W1006 14:31:06.971150  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:31:06.971156  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:31:06.971219  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:31:06.997325  656123 cri.go:89] found id: ""
	I1006 14:31:06.997341  656123 logs.go:282] 0 containers: []
	W1006 14:31:06.997349  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:31:06.997354  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:31:06.997399  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:31:07.024483  656123 cri.go:89] found id: ""
	I1006 14:31:07.024503  656123 logs.go:282] 0 containers: []
	W1006 14:31:07.024510  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:31:07.024515  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:31:07.024563  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:31:07.050897  656123 cri.go:89] found id: ""
	I1006 14:31:07.050916  656123 logs.go:282] 0 containers: []
	W1006 14:31:07.050924  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:31:07.050929  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:31:07.050988  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:31:07.076681  656123 cri.go:89] found id: ""
	I1006 14:31:07.076698  656123 logs.go:282] 0 containers: []
	W1006 14:31:07.076706  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:31:07.076716  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:31:07.076730  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:31:07.137015  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:31:07.137039  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:31:07.167691  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:31:07.167711  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:31:07.236752  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:31:07.236774  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:31:07.250497  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:31:07.250519  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:31:07.307410  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:31:07.299651   11539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:07.300252   11539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:07.301817   11539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:07.302267   11539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:07.303782   11539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
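The identical probe cycle then repeats at roughly three-second intervals (14:30:55, 14:30:58, 14:31:01, ...) without a kube-apiserver ever appearing, which is what ultimately drives the test over its timeout. An illustrative wait loop of the same shape (not minikube's actual implementation):

    # poll until an apiserver process shows up or ~5 minutes elapse
    for i in $(seq 1 100); do
      sudo pgrep -xnf 'kube-apiserver.*minikube.*' && break
      sleep 3
    done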
	I1006 14:31:09.809076  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:31:09.819941  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:31:09.819991  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:31:09.847047  656123 cri.go:89] found id: ""
	I1006 14:31:09.847066  656123 logs.go:282] 0 containers: []
	W1006 14:31:09.847075  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:31:09.847082  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:31:09.847151  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:31:09.873840  656123 cri.go:89] found id: ""
	I1006 14:31:09.873856  656123 logs.go:282] 0 containers: []
	W1006 14:31:09.873862  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:31:09.873867  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:31:09.873923  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:31:09.900892  656123 cri.go:89] found id: ""
	I1006 14:31:09.900908  656123 logs.go:282] 0 containers: []
	W1006 14:31:09.900914  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:31:09.900920  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:31:09.900967  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:31:09.927801  656123 cri.go:89] found id: ""
	I1006 14:31:09.927822  656123 logs.go:282] 0 containers: []
	W1006 14:31:09.927835  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:31:09.927842  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:31:09.927892  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:31:09.955400  656123 cri.go:89] found id: ""
	I1006 14:31:09.955420  656123 logs.go:282] 0 containers: []
	W1006 14:31:09.955428  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:31:09.955433  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:31:09.955484  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:31:09.981624  656123 cri.go:89] found id: ""
	I1006 14:31:09.981640  656123 logs.go:282] 0 containers: []
	W1006 14:31:09.981647  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:31:09.981653  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:31:09.981700  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:31:10.009693  656123 cri.go:89] found id: ""
	I1006 14:31:10.009710  656123 logs.go:282] 0 containers: []
	W1006 14:31:10.009716  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:31:10.009724  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:31:10.009735  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:31:10.075460  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:31:10.075492  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:31:10.089300  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:31:10.089327  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:31:10.148123  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:31:10.140282   11647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:10.140860   11647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:10.142433   11647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:10.142866   11647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:10.144460   11647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1006 14:31:10.148152  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:31:10.148165  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:31:10.210442  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:31:10.210473  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:31:12.742692  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:31:12.754226  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:31:12.754289  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:31:12.783228  656123 cri.go:89] found id: ""
	I1006 14:31:12.783249  656123 logs.go:282] 0 containers: []
	W1006 14:31:12.783256  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:31:12.783263  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:31:12.783324  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:31:12.811693  656123 cri.go:89] found id: ""
	I1006 14:31:12.811715  656123 logs.go:282] 0 containers: []
	W1006 14:31:12.811725  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:31:12.811732  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:31:12.811782  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:31:12.840310  656123 cri.go:89] found id: ""
	I1006 14:31:12.840332  656123 logs.go:282] 0 containers: []
	W1006 14:31:12.840342  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:31:12.840348  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:31:12.840402  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:31:12.869101  656123 cri.go:89] found id: ""
	I1006 14:31:12.869123  656123 logs.go:282] 0 containers: []
	W1006 14:31:12.869131  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:31:12.869137  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:31:12.869189  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:31:12.897605  656123 cri.go:89] found id: ""
	I1006 14:31:12.897623  656123 logs.go:282] 0 containers: []
	W1006 14:31:12.897630  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:31:12.897635  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:31:12.897693  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:31:12.926227  656123 cri.go:89] found id: ""
	I1006 14:31:12.926247  656123 logs.go:282] 0 containers: []
	W1006 14:31:12.926254  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:31:12.926260  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:31:12.926308  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:31:12.955298  656123 cri.go:89] found id: ""
	I1006 14:31:12.955315  656123 logs.go:282] 0 containers: []
	W1006 14:31:12.955324  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:31:12.955334  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:31:12.955348  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:31:13.021936  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:31:13.021962  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:31:13.036093  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:31:13.036115  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:31:13.096234  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:31:13.088298   11777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:13.088908   11777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:13.090517   11777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:13.090973   11777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:13.092543   11777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1006 14:31:13.096246  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:31:13.096258  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:31:13.156934  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:31:13.156960  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:31:15.689959  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:31:15.701228  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:31:15.701301  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:31:15.727030  656123 cri.go:89] found id: ""
	I1006 14:31:15.727050  656123 logs.go:282] 0 containers: []
	W1006 14:31:15.727059  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:31:15.727067  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:31:15.727119  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:31:15.753392  656123 cri.go:89] found id: ""
	I1006 14:31:15.753409  656123 logs.go:282] 0 containers: []
	W1006 14:31:15.753417  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:31:15.753421  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:31:15.753471  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:31:15.780750  656123 cri.go:89] found id: ""
	I1006 14:31:15.780775  656123 logs.go:282] 0 containers: []
	W1006 14:31:15.780783  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:31:15.780788  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:31:15.780842  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:31:15.807372  656123 cri.go:89] found id: ""
	I1006 14:31:15.807388  656123 logs.go:282] 0 containers: []
	W1006 14:31:15.807401  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:31:15.807406  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:31:15.807461  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:31:15.834188  656123 cri.go:89] found id: ""
	I1006 14:31:15.834222  656123 logs.go:282] 0 containers: []
	W1006 14:31:15.834233  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:31:15.834240  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:31:15.834293  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:31:15.861606  656123 cri.go:89] found id: ""
	I1006 14:31:15.861624  656123 logs.go:282] 0 containers: []
	W1006 14:31:15.861631  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:31:15.861636  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:31:15.861702  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:31:15.888991  656123 cri.go:89] found id: ""
	I1006 14:31:15.889007  656123 logs.go:282] 0 containers: []
	W1006 14:31:15.889014  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:31:15.889022  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:31:15.889035  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:31:15.956002  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:31:15.956024  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:31:15.969830  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:31:15.969850  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:31:16.026629  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:31:16.019009   11895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:16.019537   11895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:16.021047   11895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:16.021513   11895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:16.023044   11895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1006 14:31:16.026643  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:31:16.026656  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:31:16.085192  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:31:16.085220  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:31:18.616289  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:31:18.627239  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:31:18.627304  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:31:18.655298  656123 cri.go:89] found id: ""
	I1006 14:31:18.655318  656123 logs.go:282] 0 containers: []
	W1006 14:31:18.655327  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:31:18.655334  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:31:18.655392  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:31:18.682590  656123 cri.go:89] found id: ""
	I1006 14:31:18.682609  656123 logs.go:282] 0 containers: []
	W1006 14:31:18.682616  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:31:18.682623  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:31:18.682684  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:31:18.709329  656123 cri.go:89] found id: ""
	I1006 14:31:18.709349  656123 logs.go:282] 0 containers: []
	W1006 14:31:18.709359  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:31:18.709366  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:31:18.709428  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:31:18.735272  656123 cri.go:89] found id: ""
	I1006 14:31:18.735292  656123 logs.go:282] 0 containers: []
	W1006 14:31:18.735302  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:31:18.735309  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:31:18.735370  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:31:18.761956  656123 cri.go:89] found id: ""
	I1006 14:31:18.761973  656123 logs.go:282] 0 containers: []
	W1006 14:31:18.761980  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:31:18.761984  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:31:18.762047  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:31:18.788186  656123 cri.go:89] found id: ""
	I1006 14:31:18.788224  656123 logs.go:282] 0 containers: []
	W1006 14:31:18.788234  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:31:18.788241  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:31:18.788293  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:31:18.814751  656123 cri.go:89] found id: ""
	I1006 14:31:18.814768  656123 logs.go:282] 0 containers: []
	W1006 14:31:18.814775  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:31:18.814783  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:31:18.814793  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:31:18.874634  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:31:18.867140   12017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:18.867734   12017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:18.869314   12017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:18.869766   12017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:18.871291   12017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1006 14:31:18.874645  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:31:18.874658  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:31:18.934741  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:31:18.934765  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:31:18.964835  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:31:18.964857  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:31:19.034348  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:31:19.034372  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
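The cycle above is minikube's control-plane health check: it pgreps for a kube-apiserver process, asks crictl for each expected component container (apiserver, etcd, coredns, scheduler, proxy, controller-manager, kindnet), and, finding none, falls back to gathering kubelet, dmesg, CRI-O, and container-status logs. A minimal manual equivalent of the same checks, assuming shell access to the node — the commands mirror those in the log, and the curl probe is inferred from the connection-refused errors on port 8441 rather than taken from minikube itself:

    # Is an apiserver process running at all?
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
    # Any apiserver container, even an exited one?
    sudo crictl ps -a --name=kube-apiserver
    # Why isn't kubelet bringing up the static pods?
    sudo journalctl -u kubelet -n 400 --no-pager
    # Probe the endpoint kubectl keeps failing against (port assumed from the errors above)
    curl -ksS https://localhost:8441/healthz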
	I1006 14:31:21.549097  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:31:21.560431  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:31:21.560497  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:31:21.588270  656123 cri.go:89] found id: ""
	I1006 14:31:21.588285  656123 logs.go:282] 0 containers: []
	W1006 14:31:21.588292  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:31:21.588297  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:31:21.588352  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:31:21.615501  656123 cri.go:89] found id: ""
	I1006 14:31:21.615519  656123 logs.go:282] 0 containers: []
	W1006 14:31:21.615527  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:31:21.615532  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:31:21.615590  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:31:21.643122  656123 cri.go:89] found id: ""
	I1006 14:31:21.643143  656123 logs.go:282] 0 containers: []
	W1006 14:31:21.643150  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:31:21.643154  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:31:21.643222  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:31:21.670611  656123 cri.go:89] found id: ""
	I1006 14:31:21.670628  656123 logs.go:282] 0 containers: []
	W1006 14:31:21.670635  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:31:21.670642  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:31:21.670705  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:31:21.698443  656123 cri.go:89] found id: ""
	I1006 14:31:21.698460  656123 logs.go:282] 0 containers: []
	W1006 14:31:21.698467  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:31:21.698472  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:31:21.698521  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:31:21.726957  656123 cri.go:89] found id: ""
	I1006 14:31:21.726973  656123 logs.go:282] 0 containers: []
	W1006 14:31:21.726981  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:31:21.726986  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:31:21.727032  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:31:21.754606  656123 cri.go:89] found id: ""
	I1006 14:31:21.754628  656123 logs.go:282] 0 containers: []
	W1006 14:31:21.754638  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:31:21.754648  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:31:21.754661  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:31:21.814709  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:31:21.814731  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:31:21.846526  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:31:21.846543  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:31:21.915125  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:31:21.915156  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:31:21.929444  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:31:21.929482  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:31:21.988239  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:31:21.980740   12168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:21.981329   12168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:21.982927   12168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:21.983357   12168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:21.984775   12168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1006 14:31:24.489339  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:31:24.500246  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:31:24.500303  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:31:24.527224  656123 cri.go:89] found id: ""
	I1006 14:31:24.527243  656123 logs.go:282] 0 containers: []
	W1006 14:31:24.527253  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:31:24.527258  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:31:24.527309  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:31:24.552540  656123 cri.go:89] found id: ""
	I1006 14:31:24.552559  656123 logs.go:282] 0 containers: []
	W1006 14:31:24.552567  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:31:24.552573  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:31:24.552636  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:31:24.581110  656123 cri.go:89] found id: ""
	I1006 14:31:24.581125  656123 logs.go:282] 0 containers: []
	W1006 14:31:24.581131  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:31:24.581138  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:31:24.581201  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:31:24.607563  656123 cri.go:89] found id: ""
	I1006 14:31:24.607580  656123 logs.go:282] 0 containers: []
	W1006 14:31:24.607588  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:31:24.607592  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:31:24.607649  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:31:24.633221  656123 cri.go:89] found id: ""
	I1006 14:31:24.633241  656123 logs.go:282] 0 containers: []
	W1006 14:31:24.633249  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:31:24.633255  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:31:24.633303  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:31:24.658521  656123 cri.go:89] found id: ""
	I1006 14:31:24.658540  656123 logs.go:282] 0 containers: []
	W1006 14:31:24.658547  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:31:24.658552  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:31:24.658611  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:31:24.684336  656123 cri.go:89] found id: ""
	I1006 14:31:24.684351  656123 logs.go:282] 0 containers: []
	W1006 14:31:24.684358  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:31:24.684367  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:31:24.684381  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:31:24.743258  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:31:24.735488   12275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:24.735921   12275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:24.737653   12275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:24.738173   12275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:24.739491   12275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1006 14:31:24.743270  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:31:24.743283  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:31:24.802373  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:31:24.802398  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:31:24.832699  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:31:24.832716  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:31:24.898746  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:31:24.898768  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
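Each failed describe-nodes attempt emits five memcache.go:265 errors; these appear to come from client-go retrying its API discovery requests before giving up. "connect: connection refused" on [::1]:8441 means nothing ever answered on that port — the requests are not being rejected by a running server. To confirm which endpoint the pinned kubectl is targeting, a hedged one-liner (the kubeconfig path is taken from the commands above):

    # Show the server the kubeconfig points kubectl at
    sudo grep -n 'server:' /var/lib/minikube/kubeconfig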
	I1006 14:31:27.413617  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:31:27.424393  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:31:27.424454  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:31:27.452153  656123 cri.go:89] found id: ""
	I1006 14:31:27.452173  656123 logs.go:282] 0 containers: []
	W1006 14:31:27.452181  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:31:27.452186  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:31:27.452268  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:31:27.477797  656123 cri.go:89] found id: ""
	I1006 14:31:27.477815  656123 logs.go:282] 0 containers: []
	W1006 14:31:27.477822  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:31:27.477827  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:31:27.477881  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:31:27.502952  656123 cri.go:89] found id: ""
	I1006 14:31:27.502971  656123 logs.go:282] 0 containers: []
	W1006 14:31:27.502978  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:31:27.502983  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:31:27.503039  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:31:27.529416  656123 cri.go:89] found id: ""
	I1006 14:31:27.529433  656123 logs.go:282] 0 containers: []
	W1006 14:31:27.529440  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:31:27.529444  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:31:27.529504  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:31:27.554632  656123 cri.go:89] found id: ""
	I1006 14:31:27.554651  656123 logs.go:282] 0 containers: []
	W1006 14:31:27.554659  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:31:27.554664  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:31:27.554713  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:31:27.580924  656123 cri.go:89] found id: ""
	I1006 14:31:27.580942  656123 logs.go:282] 0 containers: []
	W1006 14:31:27.580948  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:31:27.580954  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:31:27.581007  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:31:27.605807  656123 cri.go:89] found id: ""
	I1006 14:31:27.605826  656123 logs.go:282] 0 containers: []
	W1006 14:31:27.605836  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:31:27.605846  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:31:27.605860  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:31:27.618904  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:31:27.618922  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:31:27.677305  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:31:27.669937   12394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:27.670557   12394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:27.672091   12394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:27.672543   12394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:27.673638   12394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1006 14:31:27.677315  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:31:27.677326  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:31:27.739103  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:31:27.739125  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:31:27.767028  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:31:27.767049  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:31:30.336333  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:31:30.348665  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:31:30.348724  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:31:30.377945  656123 cri.go:89] found id: ""
	I1006 14:31:30.377963  656123 logs.go:282] 0 containers: []
	W1006 14:31:30.377973  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:31:30.377979  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:31:30.378035  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:31:30.406369  656123 cri.go:89] found id: ""
	I1006 14:31:30.406391  656123 logs.go:282] 0 containers: []
	W1006 14:31:30.406400  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:31:30.406407  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:31:30.406484  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:31:30.435610  656123 cri.go:89] found id: ""
	I1006 14:31:30.435634  656123 logs.go:282] 0 containers: []
	W1006 14:31:30.435644  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:31:30.435650  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:31:30.435715  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:31:30.464182  656123 cri.go:89] found id: ""
	I1006 14:31:30.464201  656123 logs.go:282] 0 containers: []
	W1006 14:31:30.464222  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:31:30.464230  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:31:30.464285  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:31:30.493191  656123 cri.go:89] found id: ""
	I1006 14:31:30.493237  656123 logs.go:282] 0 containers: []
	W1006 14:31:30.493254  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:31:30.493260  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:31:30.493313  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:31:30.522664  656123 cri.go:89] found id: ""
	I1006 14:31:30.522684  656123 logs.go:282] 0 containers: []
	W1006 14:31:30.522695  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:31:30.522702  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:31:30.522762  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:31:30.553858  656123 cri.go:89] found id: ""
	I1006 14:31:30.553874  656123 logs.go:282] 0 containers: []
	W1006 14:31:30.553880  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:31:30.553891  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:31:30.553905  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:31:30.625537  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:31:30.625563  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:31:30.641100  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:31:30.641127  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:31:30.705527  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:31:30.696933   12514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:30.697691   12514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:30.699345   12514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:30.699934   12514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:30.701560   12514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1006 14:31:30.705543  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:31:30.705560  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:31:30.768236  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:31:30.768263  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
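The timestamps (14:31:21, :24, :27, :30, …) show the check repeating on a roughly three-second cadence. The same pattern as a sketch — the interval and the give-up condition here are assumptions for illustration, not minikube's actual values:

    # Poll until an apiserver process appears (interval/retry count are assumptions)
    for i in $(seq 1 100); do
      sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null && break
      sleep 3
    done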
	I1006 14:31:33.302531  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:31:33.314251  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:31:33.314308  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:31:33.343374  656123 cri.go:89] found id: ""
	I1006 14:31:33.343394  656123 logs.go:282] 0 containers: []
	W1006 14:31:33.343404  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:31:33.343411  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:31:33.343491  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:31:33.369870  656123 cri.go:89] found id: ""
	I1006 14:31:33.369885  656123 logs.go:282] 0 containers: []
	W1006 14:31:33.369891  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:31:33.369895  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:31:33.369944  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:31:33.394611  656123 cri.go:89] found id: ""
	I1006 14:31:33.394631  656123 logs.go:282] 0 containers: []
	W1006 14:31:33.394640  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:31:33.394647  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:31:33.394696  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:31:33.420323  656123 cri.go:89] found id: ""
	I1006 14:31:33.420338  656123 logs.go:282] 0 containers: []
	W1006 14:31:33.420345  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:31:33.420350  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:31:33.420399  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:31:33.446454  656123 cri.go:89] found id: ""
	I1006 14:31:33.446483  656123 logs.go:282] 0 containers: []
	W1006 14:31:33.446493  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:31:33.446501  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:31:33.446557  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:31:33.471998  656123 cri.go:89] found id: ""
	I1006 14:31:33.472013  656123 logs.go:282] 0 containers: []
	W1006 14:31:33.472019  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:31:33.472025  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:31:33.472073  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:31:33.498038  656123 cri.go:89] found id: ""
	I1006 14:31:33.498052  656123 logs.go:282] 0 containers: []
	W1006 14:31:33.498058  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:31:33.498067  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:31:33.498077  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:31:33.554956  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:31:33.547323   12635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:33.547831   12635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:33.549458   12635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:33.549938   12635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:33.551501   12635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1006 14:31:33.554967  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:31:33.554978  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:31:33.617723  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:31:33.617747  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:31:33.647466  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:31:33.647482  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:31:33.718107  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:31:33.718128  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:31:36.233955  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:31:36.245297  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:31:36.245362  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:31:36.272483  656123 cri.go:89] found id: ""
	I1006 14:31:36.272502  656123 logs.go:282] 0 containers: []
	W1006 14:31:36.272509  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:31:36.272515  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:31:36.272574  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:31:36.299177  656123 cri.go:89] found id: ""
	I1006 14:31:36.299192  656123 logs.go:282] 0 containers: []
	W1006 14:31:36.299199  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:31:36.299229  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:31:36.299284  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:31:36.325899  656123 cri.go:89] found id: ""
	I1006 14:31:36.325920  656123 logs.go:282] 0 containers: []
	W1006 14:31:36.325938  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:31:36.325946  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:31:36.326000  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:31:36.353043  656123 cri.go:89] found id: ""
	I1006 14:31:36.353059  656123 logs.go:282] 0 containers: []
	W1006 14:31:36.353065  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:31:36.353070  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:31:36.353117  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:31:36.379229  656123 cri.go:89] found id: ""
	I1006 14:31:36.379249  656123 logs.go:282] 0 containers: []
	W1006 14:31:36.379259  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:31:36.379263  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:31:36.379320  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:31:36.407572  656123 cri.go:89] found id: ""
	I1006 14:31:36.407589  656123 logs.go:282] 0 containers: []
	W1006 14:31:36.407596  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:31:36.407601  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:31:36.407651  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:31:36.435005  656123 cri.go:89] found id: ""
	I1006 14:31:36.435022  656123 logs.go:282] 0 containers: []
	W1006 14:31:36.435028  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:31:36.435036  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:31:36.435047  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:31:36.512293  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:31:36.512319  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:31:36.526942  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:31:36.526966  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:31:36.587325  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:31:36.579436   12771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:36.579991   12771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:36.581727   12771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:36.582244   12771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:36.583796   12771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1006 14:31:36.587336  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:31:36.587349  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:31:36.648638  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:31:36.648672  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:31:39.181798  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:31:39.193122  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:31:39.193188  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:31:39.221286  656123 cri.go:89] found id: ""
	I1006 14:31:39.221304  656123 logs.go:282] 0 containers: []
	W1006 14:31:39.221312  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:31:39.221317  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:31:39.221376  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:31:39.248422  656123 cri.go:89] found id: ""
	I1006 14:31:39.248437  656123 logs.go:282] 0 containers: []
	W1006 14:31:39.248445  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:31:39.248450  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:31:39.248497  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:31:39.277291  656123 cri.go:89] found id: ""
	I1006 14:31:39.277308  656123 logs.go:282] 0 containers: []
	W1006 14:31:39.277316  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:31:39.277322  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:31:39.277390  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:31:39.303982  656123 cri.go:89] found id: ""
	I1006 14:31:39.303999  656123 logs.go:282] 0 containers: []
	W1006 14:31:39.304005  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:31:39.304011  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:31:39.304062  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:31:39.330654  656123 cri.go:89] found id: ""
	I1006 14:31:39.330674  656123 logs.go:282] 0 containers: []
	W1006 14:31:39.330681  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:31:39.330686  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:31:39.330732  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:31:39.357141  656123 cri.go:89] found id: ""
	I1006 14:31:39.357156  656123 logs.go:282] 0 containers: []
	W1006 14:31:39.357163  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:31:39.357168  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:31:39.357241  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:31:39.383968  656123 cri.go:89] found id: ""
	I1006 14:31:39.383986  656123 logs.go:282] 0 containers: []
	W1006 14:31:39.383993  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:31:39.384002  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:31:39.384014  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:31:39.451579  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:31:39.451604  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:31:39.465454  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:31:39.465475  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:31:39.523259  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:31:39.515550   12896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:39.516185   12896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:39.517720   12896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:39.518181   12896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:39.519823   12896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1006 14:31:39.523273  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:31:39.523285  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:31:39.585241  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:31:39.585265  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:31:42.115015  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:31:42.126583  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:31:42.126634  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:31:42.153385  656123 cri.go:89] found id: ""
	I1006 14:31:42.153406  656123 logs.go:282] 0 containers: []
	W1006 14:31:42.153416  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:31:42.153422  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:31:42.153479  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:31:42.181021  656123 cri.go:89] found id: ""
	I1006 14:31:42.181039  656123 logs.go:282] 0 containers: []
	W1006 14:31:42.181049  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:31:42.181055  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:31:42.181116  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:31:42.208104  656123 cri.go:89] found id: ""
	I1006 14:31:42.208123  656123 logs.go:282] 0 containers: []
	W1006 14:31:42.208133  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:31:42.208139  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:31:42.208190  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:31:42.235099  656123 cri.go:89] found id: ""
	I1006 14:31:42.235115  656123 logs.go:282] 0 containers: []
	W1006 14:31:42.235123  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:31:42.235128  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:31:42.235176  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:31:42.262052  656123 cri.go:89] found id: ""
	I1006 14:31:42.262072  656123 logs.go:282] 0 containers: []
	W1006 14:31:42.262079  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:31:42.262084  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:31:42.262142  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:31:42.288093  656123 cri.go:89] found id: ""
	I1006 14:31:42.288111  656123 logs.go:282] 0 containers: []
	W1006 14:31:42.288119  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:31:42.288124  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:31:42.288179  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:31:42.314049  656123 cri.go:89] found id: ""
	I1006 14:31:42.314068  656123 logs.go:282] 0 containers: []
	W1006 14:31:42.314076  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:31:42.314087  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:31:42.314100  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:31:42.379866  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:31:42.379892  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:31:42.393937  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:31:42.393965  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:31:42.452376  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:31:42.444669   13013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:42.445228   13013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:42.446633   13013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:42.447200   13013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:42.448583   13013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1006 14:31:42.452388  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:31:42.452400  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:31:42.513323  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:31:42.513346  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
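After many identical cycles nothing has changed: no component containers exist and port 8441 is still closed. At this point the useful signal is in CRI-O and kubelet, since they are responsible for creating the static-pod containers that never show up. A hedged triage sketch, with the port number and unit name taken from the log:

    # Confirm nothing is listening on the apiserver port
    sudo ss -ltnp | grep 8441 || echo 'nothing listening on 8441'
    # Look for pod/container creation failures in the runtime
    sudo journalctl -u crio -n 400 --no-pager | grep -iE 'error|fail'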
	I1006 14:31:45.045836  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:31:45.056587  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:31:45.056634  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:31:45.082895  656123 cri.go:89] found id: ""
	I1006 14:31:45.082913  656123 logs.go:282] 0 containers: []
	W1006 14:31:45.082922  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:31:45.082930  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:31:45.082981  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:31:45.109560  656123 cri.go:89] found id: ""
	I1006 14:31:45.109579  656123 logs.go:282] 0 containers: []
	W1006 14:31:45.109589  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:31:45.109595  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:31:45.109651  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:31:45.136033  656123 cri.go:89] found id: ""
	I1006 14:31:45.136055  656123 logs.go:282] 0 containers: []
	W1006 14:31:45.136065  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:31:45.136072  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:31:45.136145  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:31:45.162396  656123 cri.go:89] found id: ""
	I1006 14:31:45.162416  656123 logs.go:282] 0 containers: []
	W1006 14:31:45.162423  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:31:45.162427  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:31:45.162493  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:31:45.188063  656123 cri.go:89] found id: ""
	I1006 14:31:45.188077  656123 logs.go:282] 0 containers: []
	W1006 14:31:45.188084  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:31:45.188090  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:31:45.188135  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:31:45.214119  656123 cri.go:89] found id: ""
	I1006 14:31:45.214140  656123 logs.go:282] 0 containers: []
	W1006 14:31:45.214150  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:31:45.214157  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:31:45.214234  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:31:45.242147  656123 cri.go:89] found id: ""
	I1006 14:31:45.242166  656123 logs.go:282] 0 containers: []
	W1006 14:31:45.242176  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:31:45.242187  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:31:45.242201  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:31:45.311929  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:31:45.311952  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:31:45.324994  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:31:45.325015  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:31:45.381458  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:31:45.373267   13133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:45.374021   13133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:45.374992   13133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:45.376701   13133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:45.377102   13133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1006 14:31:45.381470  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:31:45.381483  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:31:45.445634  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:31:45.445652  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
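Each "listing CRI containers" step shells out to crictl with a name filter and treats an empty ID list as "component not running", which is what every `found id: ""` line above records. A hedged sketch of that lookup; the crictl command line is copied from the log, the Go wrapper is ours:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // listContainerIDs runs the crictl query shown in the log and returns one
    // container ID per line; an empty result matches the `found id: ""` entries.
    func listContainerIDs(name string) ([]string, error) {
    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
    	if err != nil {
    		return nil, err
    	}
    	return strings.Fields(string(out)), nil
    }

    func main() {
    	ids, err := listContainerIDs("kube-apiserver")
    	fmt.Println(ids, err) // [] <nil> while the control plane is down
    }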
	I1006 14:31:47.975088  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:31:47.986084  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:31:47.986144  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:31:48.013186  656123 cri.go:89] found id: ""
	I1006 14:31:48.013218  656123 logs.go:282] 0 containers: []
	W1006 14:31:48.013229  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:31:48.013235  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:31:48.013289  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:31:48.039286  656123 cri.go:89] found id: ""
	I1006 14:31:48.039301  656123 logs.go:282] 0 containers: []
	W1006 14:31:48.039308  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:31:48.039313  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:31:48.039361  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:31:48.065798  656123 cri.go:89] found id: ""
	I1006 14:31:48.065813  656123 logs.go:282] 0 containers: []
	W1006 14:31:48.065821  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:31:48.065826  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:31:48.065873  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:31:48.091102  656123 cri.go:89] found id: ""
	I1006 14:31:48.091119  656123 logs.go:282] 0 containers: []
	W1006 14:31:48.091128  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:31:48.091133  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:31:48.091188  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:31:48.117766  656123 cri.go:89] found id: ""
	I1006 14:31:48.117783  656123 logs.go:282] 0 containers: []
	W1006 14:31:48.117790  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:31:48.117795  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:31:48.117844  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:31:48.144583  656123 cri.go:89] found id: ""
	I1006 14:31:48.144598  656123 logs.go:282] 0 containers: []
	W1006 14:31:48.144604  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:31:48.144609  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:31:48.144655  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:31:48.171397  656123 cri.go:89] found id: ""
	I1006 14:31:48.171413  656123 logs.go:282] 0 containers: []
	W1006 14:31:48.171421  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:31:48.171429  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:31:48.171440  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:31:48.232721  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:31:48.232743  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:31:48.262521  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:31:48.262540  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:31:48.332831  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:31:48.332851  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:31:48.346228  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:31:48.346247  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:31:48.402332  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:31:48.395067   13273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:48.395636   13273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:48.397181   13273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:48.397582   13273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:48.399142   13273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
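Once no control-plane containers are found, each pass gathers the same five log sources, each capped at its last 400 lines. The command strings below are verbatim from the log; collecting them in a map is purely illustrative:

    package main

    import "fmt"

    // Log sources gathered on every failed pass, verbatim from the run above.
    var logSources = map[string]string{
    	"kubelet":          "sudo journalctl -u kubelet -n 400",
    	"dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
    	"describe nodes":   "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig",
    	"CRI-O":            "sudo journalctl -u crio -n 400",
    	"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
    }

    func main() {
    	for name, cmd := range logSources {
    		fmt.Printf("%-16s %s\n", name, cmd)
    	}
    }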
	I1006 14:31:50.903091  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:31:50.914581  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:31:50.914643  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:31:50.940118  656123 cri.go:89] found id: ""
	I1006 14:31:50.940134  656123 logs.go:282] 0 containers: []
	W1006 14:31:50.940144  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:31:50.940152  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:31:50.940244  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:31:50.967927  656123 cri.go:89] found id: ""
	I1006 14:31:50.967942  656123 logs.go:282] 0 containers: []
	W1006 14:31:50.967950  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:31:50.967955  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:31:50.968012  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:31:50.994911  656123 cri.go:89] found id: ""
	I1006 14:31:50.994926  656123 logs.go:282] 0 containers: []
	W1006 14:31:50.994933  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:31:50.994938  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:31:50.994983  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:31:51.021349  656123 cri.go:89] found id: ""
	I1006 14:31:51.021367  656123 logs.go:282] 0 containers: []
	W1006 14:31:51.021376  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:31:51.021381  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:31:51.021450  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:31:51.047856  656123 cri.go:89] found id: ""
	I1006 14:31:51.047875  656123 logs.go:282] 0 containers: []
	W1006 14:31:51.047885  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:31:51.047892  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:31:51.047953  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:31:51.074984  656123 cri.go:89] found id: ""
	I1006 14:31:51.075002  656123 logs.go:282] 0 containers: []
	W1006 14:31:51.075009  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:31:51.075014  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:31:51.075076  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:31:51.102644  656123 cri.go:89] found id: ""
	I1006 14:31:51.102660  656123 logs.go:282] 0 containers: []
	W1006 14:31:51.102668  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:31:51.102677  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:31:51.102692  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:31:51.164842  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:31:51.164869  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:31:51.194272  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:31:51.194293  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:31:51.264785  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:31:51.264809  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:31:51.279283  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:31:51.279311  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:31:51.337565  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:31:51.329770   13401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:51.330346   13401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:51.331936   13401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:51.332399   13401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:51.334039   13401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
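Every cycle opens with the same liveness probe, `sudo pgrep -xnf kube-apiserver.*minikube.*`, and the timestamps show a roughly three-second gap between attempts. A sketch of such a poll loop; the interval is inferred from the timestamps, not taken from minikube's source:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // apiserverRunning mirrors the pgrep probe from the log: exit status 0
    // means a matching kube-apiserver process exists on the node.
    func apiserverRunning() bool {
    	return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
    }

    func main() {
    	tick := time.NewTicker(3 * time.Second) // ~3s cadence inferred from the log, an assumption
    	defer tick.Stop()
    	for range tick.C {
    		if apiserverRunning() {
    			fmt.Println("kube-apiserver is up")
    			return
    		}
    		fmt.Println("not found; gather logs and retry")
    	}
    }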
	I1006 14:31:53.839279  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:31:53.850387  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:31:53.850446  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:31:53.878099  656123 cri.go:89] found id: ""
	I1006 14:31:53.878125  656123 logs.go:282] 0 containers: []
	W1006 14:31:53.878135  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:31:53.878142  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:31:53.878199  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:31:53.905974  656123 cri.go:89] found id: ""
	I1006 14:31:53.905994  656123 logs.go:282] 0 containers: []
	W1006 14:31:53.906004  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:31:53.906011  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:31:53.906073  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:31:53.934338  656123 cri.go:89] found id: ""
	I1006 14:31:53.934355  656123 logs.go:282] 0 containers: []
	W1006 14:31:53.934362  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:31:53.934367  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:31:53.934417  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:31:53.961409  656123 cri.go:89] found id: ""
	I1006 14:31:53.961428  656123 logs.go:282] 0 containers: []
	W1006 14:31:53.961436  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:31:53.961442  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:31:53.961492  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:31:53.988451  656123 cri.go:89] found id: ""
	I1006 14:31:53.988468  656123 logs.go:282] 0 containers: []
	W1006 14:31:53.988475  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:31:53.988481  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:31:53.988541  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:31:54.015683  656123 cri.go:89] found id: ""
	I1006 14:31:54.015703  656123 logs.go:282] 0 containers: []
	W1006 14:31:54.015712  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:31:54.015718  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:31:54.015769  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:31:54.043179  656123 cri.go:89] found id: ""
	I1006 14:31:54.043196  656123 logs.go:282] 0 containers: []
	W1006 14:31:54.043215  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:31:54.043226  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:31:54.043242  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:31:54.107582  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:31:54.107606  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:31:54.138057  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:31:54.138078  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:31:54.204366  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:31:54.204394  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:31:54.218513  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:31:54.218535  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:31:54.279164  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:31:54.271489   13525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:54.272091   13525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:54.273620   13525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:54.274071   13525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:54.275622   13525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
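The "describe nodes" gather runs the version-pinned kubectl under /var/lib/minikube/binaries with the in-VM kubeconfig, so it exits with status 1 for as long as the apiserver is unreachable. A sketch of invoking it with stdout and stderr captured separately, as the log prints them; the paths are copied from the log, the wrapper is illustrative:

    package main

    import (
    	"bytes"
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Binary and kubeconfig paths are taken from the log above.
    	cmd := exec.Command("sudo",
    		"/var/lib/minikube/binaries/v1.34.1/kubectl",
    		"describe", "nodes",
    		"--kubeconfig=/var/lib/minikube/kubeconfig")
    	var stdout, stderr bytes.Buffer
    	cmd.Stdout, cmd.Stderr = &stdout, &stderr
    	err := cmd.Run() // exits 1 while the apiserver is down
    	fmt.Printf("stdout:\n%s\nstderr:\n%s\nerr: %v\n", stdout.String(), stderr.String(), err)
    }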
	I1006 14:31:56.780360  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:31:56.791915  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:31:56.791969  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:31:56.817452  656123 cri.go:89] found id: ""
	I1006 14:31:56.817470  656123 logs.go:282] 0 containers: []
	W1006 14:31:56.817477  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:31:56.817483  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:31:56.817529  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:31:56.842632  656123 cri.go:89] found id: ""
	I1006 14:31:56.842646  656123 logs.go:282] 0 containers: []
	W1006 14:31:56.842653  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:31:56.842657  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:31:56.842700  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:31:56.870346  656123 cri.go:89] found id: ""
	I1006 14:31:56.870361  656123 logs.go:282] 0 containers: []
	W1006 14:31:56.870368  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:31:56.870373  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:31:56.870421  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:31:56.898085  656123 cri.go:89] found id: ""
	I1006 14:31:56.898102  656123 logs.go:282] 0 containers: []
	W1006 14:31:56.898107  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:31:56.898112  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:31:56.898172  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:31:56.925826  656123 cri.go:89] found id: ""
	I1006 14:31:56.925842  656123 logs.go:282] 0 containers: []
	W1006 14:31:56.925849  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:31:56.925854  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:31:56.925917  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:31:56.952736  656123 cri.go:89] found id: ""
	I1006 14:31:56.952753  656123 logs.go:282] 0 containers: []
	W1006 14:31:56.952759  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:31:56.952764  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:31:56.952817  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:31:56.981505  656123 cri.go:89] found id: ""
	I1006 14:31:56.981524  656123 logs.go:282] 0 containers: []
	W1006 14:31:56.981534  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:31:56.981544  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:31:56.981558  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:31:57.038974  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:31:57.031730   13621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:57.032302   13621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:57.033897   13621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:57.034349   13621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:57.035558   13621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1006 14:31:57.038998  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:31:57.039009  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:31:57.104175  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:31:57.104199  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:31:57.133096  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:31:57.133118  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:31:57.198894  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:31:57.198924  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:31:59.714028  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:31:59.725916  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:31:59.725972  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:31:59.751782  656123 cri.go:89] found id: ""
	I1006 14:31:59.751801  656123 logs.go:282] 0 containers: []
	W1006 14:31:59.751810  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:31:59.751816  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:31:59.751864  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:31:59.776851  656123 cri.go:89] found id: ""
	I1006 14:31:59.776867  656123 logs.go:282] 0 containers: []
	W1006 14:31:59.776874  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:31:59.776878  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:31:59.776924  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:31:59.800431  656123 cri.go:89] found id: ""
	I1006 14:31:59.800447  656123 logs.go:282] 0 containers: []
	W1006 14:31:59.800455  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:31:59.800467  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:31:59.800530  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:31:59.825387  656123 cri.go:89] found id: ""
	I1006 14:31:59.825404  656123 logs.go:282] 0 containers: []
	W1006 14:31:59.825412  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:31:59.825423  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:31:59.825468  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:31:59.849169  656123 cri.go:89] found id: ""
	I1006 14:31:59.849186  656123 logs.go:282] 0 containers: []
	W1006 14:31:59.849195  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:31:59.849232  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:31:59.849291  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:31:59.874820  656123 cri.go:89] found id: ""
	I1006 14:31:59.874835  656123 logs.go:282] 0 containers: []
	W1006 14:31:59.874841  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:31:59.874846  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:31:59.874893  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:31:59.900818  656123 cri.go:89] found id: ""
	I1006 14:31:59.900834  656123 logs.go:282] 0 containers: []
	W1006 14:31:59.900840  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:31:59.900848  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:31:59.900860  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:31:59.957989  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:31:59.950533   13743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:59.951047   13743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:59.952664   13743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:59.953012   13743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:59.954540   13743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1006 14:31:59.958004  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:31:59.958025  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:32:00.016244  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:32:00.016287  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:32:00.047330  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:32:00.047346  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:32:00.111078  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:32:00.111104  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:32:02.626253  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:32:02.637551  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:32:02.637606  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:32:02.665023  656123 cri.go:89] found id: ""
	I1006 14:32:02.665040  656123 logs.go:282] 0 containers: []
	W1006 14:32:02.665050  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:32:02.665056  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:32:02.665118  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:32:02.692374  656123 cri.go:89] found id: ""
	I1006 14:32:02.692397  656123 logs.go:282] 0 containers: []
	W1006 14:32:02.692404  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:32:02.692409  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:32:02.692458  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:32:02.719922  656123 cri.go:89] found id: ""
	I1006 14:32:02.719942  656123 logs.go:282] 0 containers: []
	W1006 14:32:02.719953  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:32:02.719959  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:32:02.720014  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:32:02.746934  656123 cri.go:89] found id: ""
	I1006 14:32:02.746950  656123 logs.go:282] 0 containers: []
	W1006 14:32:02.746956  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:32:02.746962  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:32:02.747009  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:32:02.774417  656123 cri.go:89] found id: ""
	I1006 14:32:02.774435  656123 logs.go:282] 0 containers: []
	W1006 14:32:02.774442  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:32:02.774447  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:32:02.774496  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:32:02.801761  656123 cri.go:89] found id: ""
	I1006 14:32:02.801776  656123 logs.go:282] 0 containers: []
	W1006 14:32:02.801783  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:32:02.801788  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:32:02.801844  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:32:02.828981  656123 cri.go:89] found id: ""
	I1006 14:32:02.828998  656123 logs.go:282] 0 containers: []
	W1006 14:32:02.829005  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:32:02.829014  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:32:02.829028  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:32:02.895754  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:32:02.895778  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:32:02.909930  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:32:02.909950  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:32:02.968533  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:32:02.961042   13877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:32:02.961577   13877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:32:02.963104   13877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:32:02.963565   13877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:32:02.965085   13877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1006 14:32:02.968546  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:32:02.968560  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:32:03.033943  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:32:03.033967  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:32:05.566153  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:32:05.577534  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:32:05.577601  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:32:05.604282  656123 cri.go:89] found id: ""
	I1006 14:32:05.604301  656123 logs.go:282] 0 containers: []
	W1006 14:32:05.604311  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:32:05.604317  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:32:05.604375  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:32:05.631089  656123 cri.go:89] found id: ""
	I1006 14:32:05.631105  656123 logs.go:282] 0 containers: []
	W1006 14:32:05.631112  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:32:05.631116  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:32:05.631172  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:32:05.658464  656123 cri.go:89] found id: ""
	I1006 14:32:05.658484  656123 logs.go:282] 0 containers: []
	W1006 14:32:05.658495  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:32:05.658501  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:32:05.658559  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:32:05.685951  656123 cri.go:89] found id: ""
	I1006 14:32:05.685971  656123 logs.go:282] 0 containers: []
	W1006 14:32:05.685980  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:32:05.685987  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:32:05.686043  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:32:05.712003  656123 cri.go:89] found id: ""
	I1006 14:32:05.712020  656123 logs.go:282] 0 containers: []
	W1006 14:32:05.712028  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:32:05.712033  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:32:05.712093  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:32:05.740632  656123 cri.go:89] found id: ""
	I1006 14:32:05.740652  656123 logs.go:282] 0 containers: []
	W1006 14:32:05.740660  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:32:05.740667  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:32:05.740728  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:32:05.766042  656123 cri.go:89] found id: ""
	I1006 14:32:05.766064  656123 logs.go:282] 0 containers: []
	W1006 14:32:05.766072  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:32:05.766080  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:32:05.766092  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:32:05.837102  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:32:05.837132  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:32:05.851014  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:32:05.851038  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:32:05.910902  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:32:05.903038   14001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:32:05.903650   14001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:32:05.905294   14001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:32:05.905834   14001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:32:05.907440   14001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1006 14:32:05.910914  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:32:05.910927  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:32:05.975171  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:32:05.975197  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
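The cycles repeat unchanged, which implies an outer wait bounded by a deadline rather than an unbounded loop; the actual timeout is not visible in this excerpt. A sketch of a deadline-bounded wait, with the six-minute figure and the interval both placeholder assumptions:

    package main

    import (
    	"context"
    	"fmt"
    	"time"
    )

    // waitFor polls check until it succeeds or ctx expires. The deadline and
    // interval used below are placeholders; neither is shown in this log.
    func waitFor(ctx context.Context, interval time.Duration, check func() bool) error {
    	for {
    		if check() {
    			return nil
    		}
    		select {
    		case <-ctx.Done():
    			return ctx.Err()
    		case <-time.After(interval):
    		}
    	}
    }

    func main() {
    	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
    	defer cancel()
    	err := waitFor(ctx, 3*time.Second, func() bool { return false })
    	fmt.Println(err) // context.DeadlineExceeded once the deadline passes
    }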
	I1006 14:32:08.507407  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:32:08.518312  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:32:08.518362  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:32:08.544556  656123 cri.go:89] found id: ""
	I1006 14:32:08.544575  656123 logs.go:282] 0 containers: []
	W1006 14:32:08.544585  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:32:08.544591  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:32:08.544646  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:32:08.569832  656123 cri.go:89] found id: ""
	I1006 14:32:08.569849  656123 logs.go:282] 0 containers: []
	W1006 14:32:08.569858  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:32:08.569863  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:32:08.569911  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:32:08.595352  656123 cri.go:89] found id: ""
	I1006 14:32:08.595368  656123 logs.go:282] 0 containers: []
	W1006 14:32:08.595377  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:32:08.595383  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:32:08.595447  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:32:08.621980  656123 cri.go:89] found id: ""
	I1006 14:32:08.621995  656123 logs.go:282] 0 containers: []
	W1006 14:32:08.622001  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:32:08.622006  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:32:08.622062  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:32:08.648436  656123 cri.go:89] found id: ""
	I1006 14:32:08.648451  656123 logs.go:282] 0 containers: []
	W1006 14:32:08.648458  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:32:08.648462  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:32:08.648519  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:32:08.673561  656123 cri.go:89] found id: ""
	I1006 14:32:08.673579  656123 logs.go:282] 0 containers: []
	W1006 14:32:08.673589  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:32:08.673595  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:32:08.673657  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:32:08.699829  656123 cri.go:89] found id: ""
	I1006 14:32:08.699847  656123 logs.go:282] 0 containers: []
	W1006 14:32:08.699855  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:32:08.699866  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:32:08.699884  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:32:08.712951  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:32:08.712972  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:32:08.769035  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:32:08.761477   14117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:32:08.762001   14117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:32:08.763631   14117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:32:08.764099   14117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:32:08.765640   14117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:32:08.761477   14117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:32:08.762001   14117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:32:08.763631   14117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:32:08.764099   14117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:32:08.765640   14117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 14:32:08.769047  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:32:08.769063  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:32:08.832511  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:32:08.832534  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:32:08.861346  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:32:08.861364  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
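The cycle above (probe for a kube-apiserver process, then gather kubelet, dmesg, describe-nodes, CRI-O and container-status logs when the probe misses) repeats every few seconds while minikube waits for the control plane. A minimal shell sketch of the probe loop, assuming the ~3s interval inferred from the log timestamps rather than taken from minikube's source:

	# Hedged sketch: poll for an exact-match kube-apiserver process
	# (-x exact match, -n newest, -f match the full command line).
	until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	    sleep 3   # interval is an assumption based on the timestamps above
	done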
	I1006 14:32:11.430582  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:32:11.441872  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:32:11.441923  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:32:11.467567  656123 cri.go:89] found id: ""
	I1006 14:32:11.467586  656123 logs.go:282] 0 containers: []
	W1006 14:32:11.467596  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:32:11.467603  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:32:11.467660  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:32:11.494656  656123 cri.go:89] found id: ""
	I1006 14:32:11.494683  656123 logs.go:282] 0 containers: []
	W1006 14:32:11.494690  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:32:11.494695  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:32:11.494743  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:32:11.521748  656123 cri.go:89] found id: ""
	I1006 14:32:11.521763  656123 logs.go:282] 0 containers: []
	W1006 14:32:11.521770  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:32:11.521775  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:32:11.521820  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:32:11.548602  656123 cri.go:89] found id: ""
	I1006 14:32:11.548620  656123 logs.go:282] 0 containers: []
	W1006 14:32:11.548626  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:32:11.548632  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:32:11.548691  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:32:11.576572  656123 cri.go:89] found id: ""
	I1006 14:32:11.576588  656123 logs.go:282] 0 containers: []
	W1006 14:32:11.576595  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:32:11.576600  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:32:11.576651  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:32:11.603326  656123 cri.go:89] found id: ""
	I1006 14:32:11.603346  656123 logs.go:282] 0 containers: []
	W1006 14:32:11.603355  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:32:11.603360  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:32:11.603415  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:32:11.629710  656123 cri.go:89] found id: ""
	I1006 14:32:11.629728  656123 logs.go:282] 0 containers: []
	W1006 14:32:11.629738  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:32:11.629747  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:32:11.629757  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:32:11.700650  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:32:11.700726  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:32:11.714603  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:32:11.714630  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:32:11.772602  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:32:11.764966   14244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:32:11.765455   14244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:32:11.767171   14244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:32:11.767660   14244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:32:11.769186   14244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:32:11.764966   14244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:32:11.765455   14244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:32:11.767171   14244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:32:11.767660   14244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:32:11.769186   14244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 14:32:11.772614  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:32:11.772626  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:32:11.833230  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:32:11.833254  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:32:14.365875  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:32:14.376698  656123 kubeadm.go:601] duration metric: took 4m4.218544485s to restartPrimaryControlPlane
	W1006 14:32:14.376820  656123 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1006 14:32:14.376904  656123 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1006 14:32:14.835776  656123 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1006 14:32:14.848804  656123 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1006 14:32:14.857253  656123 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1006 14:32:14.857310  656123 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1006 14:32:14.864786  656123 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1006 14:32:14.864795  656123 kubeadm.go:157] found existing configuration files:
	
	I1006 14:32:14.864835  656123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1006 14:32:14.872239  656123 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1006 14:32:14.872285  656123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1006 14:32:14.879414  656123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1006 14:32:14.886697  656123 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1006 14:32:14.886741  656123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1006 14:32:14.893638  656123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1006 14:32:14.900861  656123 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1006 14:32:14.900895  656123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1006 14:32:14.907789  656123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1006 14:32:14.914902  656123 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1006 14:32:14.914933  656123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
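The stale-config cleanup above follows one per-file pattern: keep a kubeconfig only if it already points at the expected control-plane endpoint, otherwise remove it. A hedged shell equivalent (the file list and port are taken from the log; grep -q stands in for the captured-output grep minikube runs):

	for name in admin kubelet controller-manager scheduler; do
	    conf="/etc/kubernetes/${name}.conf"
	    # missing file or wrong endpoint -> remove so kubeadm regenerates it
	    sudo grep -q 'https://control-plane.minikube.internal:8441' "$conf" \
	        || sudo rm -f "$conf"
	done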
	I1006 14:32:14.921800  656123 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1006 14:32:14.978601  656123 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1006 14:32:15.038549  656123 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1006 14:36:17.406896  656123 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1006 14:36:17.407019  656123 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1006 14:36:17.410627  656123 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1006 14:36:17.410683  656123 kubeadm.go:318] [preflight] Running pre-flight checks
	I1006 14:36:17.410779  656123 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1006 14:36:17.410840  656123 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1006 14:36:17.410869  656123 kubeadm.go:318] OS: Linux
	I1006 14:36:17.410914  656123 kubeadm.go:318] CGROUPS_CPU: enabled
	I1006 14:36:17.410949  656123 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1006 14:36:17.411007  656123 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1006 14:36:17.411060  656123 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1006 14:36:17.411098  656123 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1006 14:36:17.411140  656123 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1006 14:36:17.411189  656123 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1006 14:36:17.411245  656123 kubeadm.go:318] CGROUPS_IO: enabled
	I1006 14:36:17.411317  656123 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1006 14:36:17.411401  656123 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1006 14:36:17.411485  656123 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1006 14:36:17.411556  656123 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1006 14:36:17.413722  656123 out.go:252]   - Generating certificates and keys ...
	I1006 14:36:17.413795  656123 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1006 14:36:17.413884  656123 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1006 14:36:17.413987  656123 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1006 14:36:17.414057  656123 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1006 14:36:17.414137  656123 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1006 14:36:17.414181  656123 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1006 14:36:17.414260  656123 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1006 14:36:17.414334  656123 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1006 14:36:17.414439  656123 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1006 14:36:17.414518  656123 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1006 14:36:17.414578  656123 kubeadm.go:318] [certs] Using the existing "sa" key
	I1006 14:36:17.414662  656123 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1006 14:36:17.414728  656123 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1006 14:36:17.414803  656123 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1006 14:36:17.414845  656123 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1006 14:36:17.414916  656123 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1006 14:36:17.414967  656123 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1006 14:36:17.415028  656123 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1006 14:36:17.415104  656123 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1006 14:36:17.416892  656123 out.go:252]   - Booting up control plane ...
	I1006 14:36:17.416963  656123 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1006 14:36:17.417045  656123 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1006 14:36:17.417099  656123 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1006 14:36:17.417195  656123 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1006 14:36:17.417298  656123 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1006 14:36:17.417388  656123 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1006 14:36:17.417462  656123 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1006 14:36:17.417493  656123 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1006 14:36:17.417595  656123 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1006 14:36:17.417679  656123 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1006 14:36:17.417755  656123 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 502.528699ms
	I1006 14:36:17.417834  656123 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1006 14:36:17.417918  656123 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	I1006 14:36:17.418000  656123 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1006 14:36:17.418064  656123 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1006 14:36:17.418126  656123 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000416419s
	I1006 14:36:17.418196  656123 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000737625s
	I1006 14:36:17.418279  656123 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.00070414s
	I1006 14:36:17.418282  656123 kubeadm.go:318] 
	I1006 14:36:17.418350  656123 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1006 14:36:17.418415  656123 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1006 14:36:17.418514  656123 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1006 14:36:17.418595  656123 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1006 14:36:17.418668  656123 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1006 14:36:17.418749  656123 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1006 14:36:17.418809  656123 kubeadm.go:318] 
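All three control-plane checks above fail with "connection refused", meaning the static pods never came up. The same endpoints kubeadm polls can be probed by hand from inside the node (e.g. via minikube ssh); a hedged sketch, assuming curl is available there (-k because the components serve self-signed certificates, addresses and ports as logged):

	curl -ksS https://192.168.49.2:8441/livez      # kube-apiserver
	curl -ksS https://127.0.0.1:10257/healthz      # kube-controller-manager
	curl -ksS https://127.0.0.1:10259/livez        # kube-scheduler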
	W1006 14:36:17.418920  656123 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 502.528699ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000416419s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000737625s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.00070414s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	I1006 14:36:17.419037  656123 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1006 14:36:17.865331  656123 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1006 14:36:17.878364  656123 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1006 14:36:17.878407  656123 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1006 14:36:17.886488  656123 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1006 14:36:17.886495  656123 kubeadm.go:157] found existing configuration files:
	
	I1006 14:36:17.886535  656123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1006 14:36:17.894142  656123 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1006 14:36:17.894180  656123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1006 14:36:17.901791  656123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1006 14:36:17.909427  656123 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1006 14:36:17.909474  656123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1006 14:36:17.916720  656123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1006 14:36:17.924474  656123 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1006 14:36:17.924517  656123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1006 14:36:17.931765  656123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1006 14:36:17.939342  656123 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1006 14:36:17.939397  656123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1006 14:36:17.947232  656123 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1006 14:36:17.986103  656123 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1006 14:36:17.986155  656123 kubeadm.go:318] [preflight] Running pre-flight checks
	I1006 14:36:18.005746  656123 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1006 14:36:18.005847  656123 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1006 14:36:18.005884  656123 kubeadm.go:318] OS: Linux
	I1006 14:36:18.005928  656123 kubeadm.go:318] CGROUPS_CPU: enabled
	I1006 14:36:18.005966  656123 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1006 14:36:18.006009  656123 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1006 14:36:18.006047  656123 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1006 14:36:18.006115  656123 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1006 14:36:18.006229  656123 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1006 14:36:18.006274  656123 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1006 14:36:18.006314  656123 kubeadm.go:318] CGROUPS_IO: enabled
	I1006 14:36:18.063701  656123 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1006 14:36:18.063828  656123 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1006 14:36:18.063979  656123 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1006 14:36:18.070276  656123 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1006 14:36:18.073073  656123 out.go:252]   - Generating certificates and keys ...
	I1006 14:36:18.073146  656123 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1006 14:36:18.073230  656123 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1006 14:36:18.073310  656123 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1006 14:36:18.073360  656123 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1006 14:36:18.073469  656123 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1006 14:36:18.073537  656123 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1006 14:36:18.073593  656123 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1006 14:36:18.073643  656123 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1006 14:36:18.073731  656123 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1006 14:36:18.073828  656123 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1006 14:36:18.073881  656123 kubeadm.go:318] [certs] Using the existing "sa" key
	I1006 14:36:18.073950  656123 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1006 14:36:18.358369  656123 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1006 14:36:18.660416  656123 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1006 14:36:18.904822  656123 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1006 14:36:19.181972  656123 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1006 14:36:19.419333  656123 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1006 14:36:19.419883  656123 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1006 14:36:19.422018  656123 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1006 14:36:19.424552  656123 out.go:252]   - Booting up control plane ...
	I1006 14:36:19.424633  656123 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1006 14:36:19.424695  656123 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1006 14:36:19.424766  656123 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1006 14:36:19.438773  656123 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1006 14:36:19.438935  656123 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1006 14:36:19.446167  656123 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1006 14:36:19.446370  656123 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1006 14:36:19.446407  656123 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1006 14:36:19.549636  656123 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1006 14:36:19.549773  656123 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1006 14:36:21.051643  656123 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.501975645s
	I1006 14:36:21.055540  656123 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1006 14:36:21.055642  656123 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	I1006 14:36:21.055761  656123 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1006 14:36:21.055838  656123 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1006 14:40:21.055953  656123 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000134857s
	I1006 14:40:21.056046  656123 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.00022136s
	I1006 14:40:21.056101  656123 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000206831s
	I1006 14:40:21.056104  656123 kubeadm.go:318] 
	I1006 14:40:21.056173  656123 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1006 14:40:21.056304  656123 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1006 14:40:21.056432  656123 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1006 14:40:21.056532  656123 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1006 14:40:21.056641  656123 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1006 14:40:21.056764  656123 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1006 14:40:21.056770  656123 kubeadm.go:318] 
	I1006 14:40:21.060023  656123 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1006 14:40:21.060145  656123 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1006 14:40:21.060722  656123 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8441/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline]
	I1006 14:40:21.060819  656123 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1006 14:40:21.060909  656123 kubeadm.go:402] duration metric: took 12m10.94114452s to StartCluster
	I1006 14:40:21.060976  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:40:21.061036  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:40:21.089107  656123 cri.go:89] found id: ""
	I1006 14:40:21.089130  656123 logs.go:282] 0 containers: []
	W1006 14:40:21.089137  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:40:21.089143  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:40:21.089218  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:40:21.116923  656123 cri.go:89] found id: ""
	I1006 14:40:21.116942  656123 logs.go:282] 0 containers: []
	W1006 14:40:21.116948  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:40:21.116954  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:40:21.117001  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:40:21.144161  656123 cri.go:89] found id: ""
	I1006 14:40:21.144196  656123 logs.go:282] 0 containers: []
	W1006 14:40:21.144219  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:40:21.144227  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:40:21.144287  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:40:21.173031  656123 cri.go:89] found id: ""
	I1006 14:40:21.173051  656123 logs.go:282] 0 containers: []
	W1006 14:40:21.173059  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:40:21.173065  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:40:21.173117  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:40:21.200194  656123 cri.go:89] found id: ""
	I1006 14:40:21.200232  656123 logs.go:282] 0 containers: []
	W1006 14:40:21.200242  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:40:21.200249  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:40:21.200313  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:40:21.227692  656123 cri.go:89] found id: ""
	I1006 14:40:21.227708  656123 logs.go:282] 0 containers: []
	W1006 14:40:21.227715  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:40:21.227720  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:40:21.227777  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:40:21.255803  656123 cri.go:89] found id: ""
	I1006 14:40:21.255827  656123 logs.go:282] 0 containers: []
	W1006 14:40:21.255836  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:40:21.255848  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:40:21.255863  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:40:21.269683  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:40:21.269708  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:40:21.330259  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:40:21.322987   15591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:40:21.323612   15591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:40:21.324719   15591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:40:21.325098   15591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:40:21.326635   15591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:40:21.322987   15591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:40:21.323612   15591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:40:21.324719   15591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:40:21.325098   15591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:40:21.326635   15591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 14:40:21.330282  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:40:21.330295  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:40:21.395010  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:40:21.395036  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:40:21.425956  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:40:21.425975  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1006 14:40:21.494244  656123 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.501975645s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000134857s
	[control-plane-check] kube-scheduler is not healthy after 4m0.00022136s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000206831s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8441/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline]
	To see the stack trace of this error execute with --v=5 or higher
	W1006 14:40:21.494316  656123 out.go:285] * 
	W1006 14:40:21.494402  656123 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.501975645s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000134857s
	[control-plane-check] kube-scheduler is not healthy after 4m0.00022136s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000206831s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI.
	Here is one example of how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8441/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1006 14:40:21.494415  656123 out.go:285] * 
	W1006 14:40:21.496145  656123 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1006 14:40:21.499891  656123 out.go:203] 
	W1006 14:40:21.500973  656123 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout/stderr: (byte-for-byte identical to the kubeadm init output quoted above)
	
	W1006 14:40:21.500999  656123 out.go:285] * 
	I1006 14:40:21.502231  656123 out.go:203] 
	
	
	==> CRI-O <==
	Oct 06 14:40:32 functional-135520 crio[5849]: time="2025-10-06T14:40:32.89139927Z" level=info msg="Checking image status: kicbase/echo-server:functional-135520" id=2fdc8ae0-74cb-4379-b2fd-000512a96d7e name=/runtime.v1.ImageService/ImageStatus
	Oct 06 14:40:32 functional-135520 crio[5849]: time="2025-10-06T14:40:32.918994026Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-135520" id=ab3fb7f0-e5b4-49fd-8d10-f1f8acba9f64 name=/runtime.v1.ImageService/ImageStatus
	Oct 06 14:40:32 functional-135520 crio[5849]: time="2025-10-06T14:40:32.919111647Z" level=info msg="Image docker.io/kicbase/echo-server:functional-135520 not found" id=ab3fb7f0-e5b4-49fd-8d10-f1f8acba9f64 name=/runtime.v1.ImageService/ImageStatus
	Oct 06 14:40:32 functional-135520 crio[5849]: time="2025-10-06T14:40:32.919146294Z" level=info msg="Neither image nor artifact docker.io/kicbase/echo-server:functional-135520 found" id=ab3fb7f0-e5b4-49fd-8d10-f1f8acba9f64 name=/runtime.v1.ImageService/ImageStatus
	Oct 06 14:40:32 functional-135520 crio[5849]: time="2025-10-06T14:40:32.946070229Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-135520" id=f1b76fb8-2601-4329-a83b-036d044b53a4 name=/runtime.v1.ImageService/ImageStatus
	Oct 06 14:40:32 functional-135520 crio[5849]: time="2025-10-06T14:40:32.94625581Z" level=info msg="Image localhost/kicbase/echo-server:functional-135520 not found" id=f1b76fb8-2601-4329-a83b-036d044b53a4 name=/runtime.v1.ImageService/ImageStatus
	Oct 06 14:40:32 functional-135520 crio[5849]: time="2025-10-06T14:40:32.946327676Z" level=info msg="Neither image nor artifact localhost/kicbase/echo-server:functional-135520 found" id=f1b76fb8-2601-4329-a83b-036d044b53a4 name=/runtime.v1.ImageService/ImageStatus
	Oct 06 14:40:33 functional-135520 crio[5849]: time="2025-10-06T14:40:33.736074966Z" level=info msg="Checking image status: kicbase/echo-server:functional-135520" id=b0d58989-d35a-49df-b66f-73123c87264c name=/runtime.v1.ImageService/ImageStatus
	Oct 06 14:40:33 functional-135520 crio[5849]: time="2025-10-06T14:40:33.766254225Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-135520" id=33b10420-71fc-4bcf-b97c-005c11159859 name=/runtime.v1.ImageService/ImageStatus
	Oct 06 14:40:33 functional-135520 crio[5849]: time="2025-10-06T14:40:33.76639425Z" level=info msg="Image docker.io/kicbase/echo-server:functional-135520 not found" id=33b10420-71fc-4bcf-b97c-005c11159859 name=/runtime.v1.ImageService/ImageStatus
	Oct 06 14:40:33 functional-135520 crio[5849]: time="2025-10-06T14:40:33.76642926Z" level=info msg="Neither image nor artifact docker.io/kicbase/echo-server:functional-135520 found" id=33b10420-71fc-4bcf-b97c-005c11159859 name=/runtime.v1.ImageService/ImageStatus
	Oct 06 14:40:33 functional-135520 crio[5849]: time="2025-10-06T14:40:33.798335064Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-135520" id=c07c47b5-f123-4df6-aac0-718c9481559f name=/runtime.v1.ImageService/ImageStatus
	Oct 06 14:40:33 functional-135520 crio[5849]: time="2025-10-06T14:40:33.798458706Z" level=info msg="Image localhost/kicbase/echo-server:functional-135520 not found" id=c07c47b5-f123-4df6-aac0-718c9481559f name=/runtime.v1.ImageService/ImageStatus
	Oct 06 14:40:33 functional-135520 crio[5849]: time="2025-10-06T14:40:33.798490196Z" level=info msg="Neither image nor artifact localhost/kicbase/echo-server:functional-135520 found" id=c07c47b5-f123-4df6-aac0-718c9481559f name=/runtime.v1.ImageService/ImageStatus
	Oct 06 14:40:33 functional-135520 crio[5849]: time="2025-10-06T14:40:33.980963669Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=17b6706e-b500-4524-871f-23df38e70571 name=/runtime.v1.ImageService/ImageStatus
	Oct 06 14:40:33 functional-135520 crio[5849]: time="2025-10-06T14:40:33.981925826Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=94f4b8be-c003-4976-9cb9-8a805158b29d name=/runtime.v1.ImageService/ImageStatus
	Oct 06 14:40:33 functional-135520 crio[5849]: time="2025-10-06T14:40:33.982820585Z" level=info msg="Creating container: kube-system/kube-scheduler-functional-135520/kube-scheduler" id=af53cacb-5aef-4f09-b7c7-e182743a4512 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:40:33 functional-135520 crio[5849]: time="2025-10-06T14:40:33.983106395Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 14:40:33 functional-135520 crio[5849]: time="2025-10-06T14:40:33.987700403Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 14:40:33 functional-135520 crio[5849]: time="2025-10-06T14:40:33.988175946Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 14:40:34 functional-135520 crio[5849]: time="2025-10-06T14:40:34.003670737Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=af53cacb-5aef-4f09-b7c7-e182743a4512 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:40:34 functional-135520 crio[5849]: time="2025-10-06T14:40:34.005132701Z" level=info msg="createCtr: deleting container ID aa3a2f6476915d7b5d9b1bd05a3095d22efa7de7f25df14d6830c1b4bad20c39 from idIndex" id=af53cacb-5aef-4f09-b7c7-e182743a4512 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:40:34 functional-135520 crio[5849]: time="2025-10-06T14:40:34.005171158Z" level=info msg="createCtr: removing container aa3a2f6476915d7b5d9b1bd05a3095d22efa7de7f25df14d6830c1b4bad20c39" id=af53cacb-5aef-4f09-b7c7-e182743a4512 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:40:34 functional-135520 crio[5849]: time="2025-10-06T14:40:34.005225713Z" level=info msg="createCtr: deleting container aa3a2f6476915d7b5d9b1bd05a3095d22efa7de7f25df14d6830c1b4bad20c39 from storage" id=af53cacb-5aef-4f09-b7c7-e182743a4512 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:40:34 functional-135520 crio[5849]: time="2025-10-06T14:40:34.007324024Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-functional-135520_kube-system_5115bd1eba9594a3f2b99b5d6a4b9d59_0" id=af53cacb-5aef-4f09-b7c7-e182743a4512 name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:40:36.114553   17325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:40:36.115632   17325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:40:36.116120   17325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:40:36.117671   17325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:40:36.118111   17325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	
	
	==> kernel <==
	 14:40:36 up  5:22,  0 user,  load average: 1.18, 0.28, 0.32
	Linux functional-135520 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 06 14:40:29 functional-135520 kubelet[14966]:  > podSandboxID="0bf6050e948f47f363040ce421949b89bef2d06623cc9fef382c27f04872ce86"
	Oct 06 14:40:29 functional-135520 kubelet[14966]: E1006 14:40:29.023549   14966 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 06 14:40:29 functional-135520 kubelet[14966]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 06 14:40:29 functional-135520 kubelet[14966]:  > podSandboxID="91ab0a64f17ca953284929376780a86381ab6a8cae1f4af7da89790dc4c0e8df"
	Oct 06 14:40:29 functional-135520 kubelet[14966]: E1006 14:40:29.023668   14966 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 06 14:40:29 functional-135520 kubelet[14966]:         container kube-apiserver start failed in pod kube-apiserver-functional-135520_kube-system(9c0f460a73b4e4a7087ce2a722c4cad4): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 06 14:40:29 functional-135520 kubelet[14966]:  > logger="UnhandledError"
	Oct 06 14:40:29 functional-135520 kubelet[14966]: E1006 14:40:29.023801   14966 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-functional-135520" podUID="9c0f460a73b4e4a7087ce2a722c4cad4"
	Oct 06 14:40:29 functional-135520 kubelet[14966]: E1006 14:40:29.023746   14966 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 06 14:40:29 functional-135520 kubelet[14966]:         container etcd start failed in pod etcd-functional-135520_kube-system(f24ebbe4b3fc964d32e35d345c0d3653): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 06 14:40:29 functional-135520 kubelet[14966]:  > logger="UnhandledError"
	Oct 06 14:40:29 functional-135520 kubelet[14966]: E1006 14:40:29.024948   14966 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-functional-135520" podUID="f24ebbe4b3fc964d32e35d345c0d3653"
	Oct 06 14:40:30 functional-135520 kubelet[14966]: E1006 14:40:30.994095   14966 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-135520\" not found"
	Oct 06 14:40:31 functional-135520 kubelet[14966]: E1006 14:40:31.602306   14966 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-135520?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="7s"
	Oct 06 14:40:31 functional-135520 kubelet[14966]: I1006 14:40:31.764420   14966 kubelet_node_status.go:75] "Attempting to register node" node="functional-135520"
	Oct 06 14:40:31 functional-135520 kubelet[14966]: E1006 14:40:31.764871   14966 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8441/api/v1/nodes\": dial tcp 192.168.49.2:8441: connect: connection refused" node="functional-135520"
	Oct 06 14:40:33 functional-135520 kubelet[14966]: E1006 14:40:33.980503   14966 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-135520\" not found" node="functional-135520"
	Oct 06 14:40:34 functional-135520 kubelet[14966]: E1006 14:40:34.007644   14966 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 06 14:40:34 functional-135520 kubelet[14966]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 06 14:40:34 functional-135520 kubelet[14966]:  > podSandboxID="526b997044ad8cc54e45aef5a5faa2edaadc9cabbedd2784eaded2bd6562135f"
	Oct 06 14:40:34 functional-135520 kubelet[14966]: E1006 14:40:34.007745   14966 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 06 14:40:34 functional-135520 kubelet[14966]:         container kube-scheduler start failed in pod kube-scheduler-functional-135520_kube-system(5115bd1eba9594a3f2b99b5d6a4b9d59): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 06 14:40:34 functional-135520 kubelet[14966]:  > logger="UnhandledError"
	Oct 06 14:40:34 functional-135520 kubelet[14966]: E1006 14:40:34.007777   14966 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-functional-135520" podUID="5115bd1eba9594a3f2b99b5d6a4b9d59"
	Oct 06 14:40:36 functional-135520 kubelet[14966]: E1006 14:40:36.021610   14966 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8441/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8441: connect: connection refused" event="&Event{ObjectMeta:{functional-135520.186beda7023a08f5  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-135520,UID:functional-135520,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node functional-135520 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:functional-135520,},FirstTimestamp:2025-10-06 14:36:20.970989813 +0000 UTC m=+1.419813170,LastTimestamp:2025-10-06 14:36:20.970989813 +0000 UTC m=+1.419813170,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-135520,}"
	

                                                
                                                
-- /stdout --
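The recurring "cannot open sd-bus: No such file or directory" in the CRI-O and kubelet logs above is why no control-plane container could be created: CRI-O appears to be configured to drive cgroups through systemd, so every container create needs the systemd DBus socket, and this environment does not expose one. A minimal diagnostic sketch, assuming CRI-O's default cgroup_manager config key and the standard systemd socket path (the crictl endpoint is the one kubeadm printed above):

	# Which cgroup manager is CRI-O using? "systemd" requires a reachable sd-bus socket.
	crio config | grep -i cgroup_manager
	# The systemd private bus socket that the systemd cgroup manager talks to:
	ls -l /run/systemd/private
	# List the failing kube containers, as kubeadm suggested:
	crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause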
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-135520 -n functional-135520
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-135520 -n functional-135520: exit status 2 (331.851092ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-135520" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (2.28s)
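The parallel-test failures in this run share one root cause: the apiserver on port 8441 never came up, so every client call is refused. A quick manual probe mirroring what the helpers ran (profile name, port, and endpoint are taken from the logs above; the exact invocations are an illustrative sketch, not part of the test suite):

	out/minikube-linux-amd64 status -p functional-135520
	curl -k https://192.168.49.2:8441/livez    # refused while the control plane is down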

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (241.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[the warning above was emitted 22 times in a row while the test polled the API server]
I1006 14:40:52.265495  629719 retry.go:31] will retry after 9.178619153s: Temporary Error: Get "http:": http: no Host in request URL
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[the warning above was emitted 9 times in a row while the test polled the API server]
I1006 14:41:01.444778  629719 retry.go:31] will retry after 12.087222788s: Temporary Error: Get "http:": http: no Host in request URL
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[the warning above was emitted 12 times in a row while the test polled the API server]
I1006 14:41:13.532347  629719 retry.go:31] will retry after 49.249754288s: Temporary Error: Get "http:": http: no Host in request URL
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: [previous WARNING repeated 153 more times while the poll retried against the stopped apiserver for ~4m]
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
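The poll above is the test helper repeatedly listing pods by label selector against the profile's apiserver endpoint; the last warning differs because the 4m context expired inside client-go's rate limiter before another dial was attempted. A minimal manual reproduction (a sketch only, assuming the kubectl context minikube writes for this profile name) would be:

	# same query the helper issues; fails with "connection refused" while the apiserver is down
	kubectl --context functional-135520 -n kube-system get pods -l integration-test=storage-provisioner
	# raw reachability check against the endpoint in the URL above; expect a TLS/auth rejection when the apiserver is up, connection refused when it is down
	curl -sk 'https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner'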
functional_test_pvc_test.go:50: ***** TestFunctional/parallel/PersistentVolumeClaim: pod "integration-test=storage-provisioner" failed to start within 4m0s: context deadline exceeded ****
functional_test_pvc_test.go:50: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-135520 -n functional-135520
functional_test_pvc_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-135520 -n functional-135520: exit status 2 (307.486013ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
functional_test_pvc_test.go:50: status error: exit status 2 (may be ok)
functional_test_pvc_test.go:50: "functional-135520" apiserver is not running, skipping kubectl commands (state="Stopped")
functional_test_pvc_test.go:51: failed waiting for storage-provisioner: integration-test=storage-provisioner within 4m0s: context deadline exceeded
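Since status reports the apiserver as Stopped, a usual follow-up (a hedged sketch using stock minikube and docker commands, not part of this run's output) is to pull the profile's logs and look for the kube-apiserver container inside the kic node:

	# overall component view for the profile
	out/minikube-linux-amd64 status -p functional-135520
	# collected control-plane logs, including kube-apiserver
	out/minikube-linux-amd64 logs -p functional-135520
	# list apiserver containers (running or exited) inside the node container; assumes crictl is on PATH there, as in kicbase images
	docker exec functional-135520 crictl ps -a --name kube-apiserver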
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-135520
helpers_test.go:243: (dbg) docker inspect functional-135520:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "3dd9a226ea42760de316c3cd10c240a06a297d856d2072b1bb8c9de31097dd20",
	        "Created": "2025-10-06T14:13:32.283355011Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 644403,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-06T14:13:32.318096257Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/3dd9a226ea42760de316c3cd10c240a06a297d856d2072b1bb8c9de31097dd20/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3dd9a226ea42760de316c3cd10c240a06a297d856d2072b1bb8c9de31097dd20/hostname",
	        "HostsPath": "/var/lib/docker/containers/3dd9a226ea42760de316c3cd10c240a06a297d856d2072b1bb8c9de31097dd20/hosts",
	        "LogPath": "/var/lib/docker/containers/3dd9a226ea42760de316c3cd10c240a06a297d856d2072b1bb8c9de31097dd20/3dd9a226ea42760de316c3cd10c240a06a297d856d2072b1bb8c9de31097dd20-json.log",
	        "Name": "/functional-135520",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-135520:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-135520",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "3dd9a226ea42760de316c3cd10c240a06a297d856d2072b1bb8c9de31097dd20",
	                "LowerDir": "/var/lib/docker/overlay2/fc963905026931708302dacddcd89a9d41c6b02cea585cc1ff491aa62dc8d60a-init/diff:/var/lib/docker/overlay2/498c39ad2e273bbda04a4b230222b9767ea2da097b1fe98436168d26143cd080/diff",
	                "MergedDir": "/var/lib/docker/overlay2/fc963905026931708302dacddcd89a9d41c6b02cea585cc1ff491aa62dc8d60a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/fc963905026931708302dacddcd89a9d41c6b02cea585cc1ff491aa62dc8d60a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/fc963905026931708302dacddcd89a9d41c6b02cea585cc1ff491aa62dc8d60a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-135520",
	                "Source": "/var/lib/docker/volumes/functional-135520/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-135520",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-135520",
	                "name.minikube.sigs.k8s.io": "functional-135520",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6368ffca3e5840f94a34614c511d9f0a0a4ca0d05de4fe1f94c8bfdc332f1a62",
	            "SandboxKey": "/var/run/docker/netns/6368ffca3e58",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32878"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32879"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32882"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32880"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32881"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-135520": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ae:d1:94:25:38:1c",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f712be59dd18dac98bed5f234c9f77a39e85277143d6f46285adcd3b0185d552",
	                    "EndpointID": "b816964b653b1b5116e3262dfdc87af272931013ef5b9e2714c9ff7357118a6f",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-135520",
	                        "3dd9a226ea42"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-135520 -n functional-135520
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-135520 -n functional-135520: exit status 2 (303.018856ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
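(Note: with the docker driver, the {{.Host}} field reflects only the state of the Docker container backing the profile, which is why it reads "Running" while the apiserver inside it is down; the harness probes the two separately. For illustration, both fields can be read in one call via the status command's Go template; this command is a suggestion and was not part of the recorded run:

	out/minikube-linux-amd64 status -p functional-135520 --format='{{.Host}} {{.APIServer}}')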
helpers_test.go:252: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-135520 logs -n 25
helpers_test.go:260: TestFunctional/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                        ARGS                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ mount          │ -p functional-135520 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1055249216/001:/mount1 --alsologtostderr -v=1 │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │                     │
	│ mount          │ -p functional-135520 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1055249216/001:/mount2 --alsologtostderr -v=1 │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │                     │
	│ ssh            │ functional-135520 ssh findmnt -T /mount1                                                                           │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │ 06 Oct 25 14:40 UTC │
	│ ssh            │ functional-135520 ssh findmnt -T /mount2                                                                           │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │ 06 Oct 25 14:40 UTC │
	│ ssh            │ functional-135520 ssh findmnt -T /mount3                                                                           │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │ 06 Oct 25 14:40 UTC │
	│ mount          │ -p functional-135520 --kill=true                                                                                   │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │                     │
	│ service        │ functional-135520 service list                                                                                     │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │                     │
	│ service        │ functional-135520 service list -o json                                                                             │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │                     │
	│ service        │ functional-135520 service --namespace=default --https --url hello-node                                             │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │                     │
	│ service        │ functional-135520 service hello-node --url --format={{.IP}}                                                        │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │                     │
	│ service        │ functional-135520 service hello-node --url                                                                         │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │                     │
	│ start          │ -p functional-135520 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio          │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │                     │
	│ start          │ -p functional-135520 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio          │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │                     │
	│ start          │ -p functional-135520 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                    │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │                     │
	│ dashboard      │ --url --port 36195 -p functional-135520 --alsologtostderr -v=1                                                     │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │                     │
	│ update-context │ functional-135520 update-context --alsologtostderr -v=2                                                            │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │ 06 Oct 25 14:40 UTC │
	│ update-context │ functional-135520 update-context --alsologtostderr -v=2                                                            │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │ 06 Oct 25 14:40 UTC │
	│ update-context │ functional-135520 update-context --alsologtostderr -v=2                                                            │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │ 06 Oct 25 14:40 UTC │
	│ image          │ functional-135520 image ls --format short --alsologtostderr                                                        │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │ 06 Oct 25 14:40 UTC │
	│ image          │ functional-135520 image ls --format json --alsologtostderr                                                         │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │ 06 Oct 25 14:40 UTC │
	│ image          │ functional-135520 image ls --format table --alsologtostderr                                                        │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │ 06 Oct 25 14:40 UTC │
	│ image          │ functional-135520 image ls --format yaml --alsologtostderr                                                         │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │ 06 Oct 25 14:40 UTC │
	│ ssh            │ functional-135520 ssh pgrep buildkitd                                                                              │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │                     │
	│ image          │ functional-135520 image build -t localhost/my-image:functional-135520 testdata/build --alsologtostderr             │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │ 06 Oct 25 14:40 UTC │
	│ image          │ functional-135520 image ls                                                                                         │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │ 06 Oct 25 14:40 UTC │
	└────────────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/06 14:40:40
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1006 14:40:40.232397  678375 out.go:360] Setting OutFile to fd 1 ...
	I1006 14:40:40.232695  678375 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 14:40:40.232706  678375 out.go:374] Setting ErrFile to fd 2...
	I1006 14:40:40.232710  678375 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 14:40:40.232913  678375 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-626179/.minikube/bin
	I1006 14:40:40.233416  678375 out.go:368] Setting JSON to false
	I1006 14:40:40.234527  678375 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":19376,"bootTime":1759742264,"procs":193,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1006 14:40:40.234623  678375 start.go:140] virtualization: kvm guest
	I1006 14:40:40.236341  678375 out.go:179] * [functional-135520] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1006 14:40:40.237443  678375 out.go:179]   - MINIKUBE_LOCATION=21701
	I1006 14:40:40.237480  678375 notify.go:220] Checking for updates...
	I1006 14:40:40.239720  678375 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1006 14:40:40.240829  678375 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21701-626179/kubeconfig
	I1006 14:40:40.241859  678375 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21701-626179/.minikube
	I1006 14:40:40.242876  678375 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1006 14:40:40.243805  678375 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1006 14:40:40.245219  678375 config.go:182] Loaded profile config "functional-135520": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 14:40:40.245691  678375 driver.go:421] Setting default libvirt URI to qemu:///system
	I1006 14:40:40.271708  678375 docker.go:123] docker version: linux-28.5.0:Docker Engine - Community
	I1006 14:40:40.271845  678375 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 14:40:40.332594  678375 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-06 14:40:40.321774938 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1006 14:40:40.332758  678375 docker.go:318] overlay module found
	I1006 14:40:40.333962  678375 out.go:179] * Using the docker driver based on existing profile
	I1006 14:40:40.335324  678375 start.go:304] selected driver: docker
	I1006 14:40:40.335338  678375 start.go:924] validating driver "docker" against &{Name:functional-135520 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-135520 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 14:40:40.335418  678375 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1006 14:40:40.335503  678375 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 14:40:40.404152  678375 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-06 14:40:40.39324905 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1006 14:40:40.405093  678375 cni.go:84] Creating CNI manager for ""
	I1006 14:40:40.405186  678375 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1006 14:40:40.405273  678375 start.go:348] cluster config:
	{Name:functional-135520 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-135520 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 14:40:40.407149  678375 out.go:179] * dry-run validation complete!
	
	
	==> CRI-O <==
	Oct 06 14:44:24 functional-135520 crio[5849]: time="2025-10-06T14:44:24.000491931Z" level=info msg="createCtr: removing container 0709efead3184e98b3b5cdf3e81b51c34711bbcf72a1f475ae939c86fa523918" id=c99769f4-fd6e-494a-a813-f0c0840b2839 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:44:24 functional-135520 crio[5849]: time="2025-10-06T14:44:24.000520464Z" level=info msg="createCtr: deleting container 0709efead3184e98b3b5cdf3e81b51c34711bbcf72a1f475ae939c86fa523918 from storage" id=c99769f4-fd6e-494a-a813-f0c0840b2839 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:44:24 functional-135520 crio[5849]: time="2025-10-06T14:44:24.002677461Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-functional-135520_kube-system_5115bd1eba9594a3f2b99b5d6a4b9d59_0" id=c99769f4-fd6e-494a-a813-f0c0840b2839 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:44:25 functional-135520 crio[5849]: time="2025-10-06T14:44:25.980553718Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=84ca6a29-99e0-42f0-8e39-6bce3d58b4c6 name=/runtime.v1.ImageService/ImageStatus
	Oct 06 14:44:25 functional-135520 crio[5849]: time="2025-10-06T14:44:25.982550046Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=1394da2e-0a15-404c-82ab-9a4570d79370 name=/runtime.v1.ImageService/ImageStatus
	Oct 06 14:44:25 functional-135520 crio[5849]: time="2025-10-06T14:44:25.983657083Z" level=info msg="Creating container: kube-system/kube-apiserver-functional-135520/kube-apiserver" id=9cab203c-0469-458e-b752-a4b8f8d23d32 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:44:25 functional-135520 crio[5849]: time="2025-10-06T14:44:25.983914167Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 14:44:25 functional-135520 crio[5849]: time="2025-10-06T14:44:25.987248591Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 14:44:25 functional-135520 crio[5849]: time="2025-10-06T14:44:25.98770783Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 14:44:26 functional-135520 crio[5849]: time="2025-10-06T14:44:26.006263416Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=9cab203c-0469-458e-b752-a4b8f8d23d32 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:44:26 functional-135520 crio[5849]: time="2025-10-06T14:44:26.007815269Z" level=info msg="createCtr: deleting container ID 51bb13d13b5ad8cb8a85db1595e2ef29e491fbadc8e8a59c507e761a92686902 from idIndex" id=9cab203c-0469-458e-b752-a4b8f8d23d32 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:44:26 functional-135520 crio[5849]: time="2025-10-06T14:44:26.007862465Z" level=info msg="createCtr: removing container 51bb13d13b5ad8cb8a85db1595e2ef29e491fbadc8e8a59c507e761a92686902" id=9cab203c-0469-458e-b752-a4b8f8d23d32 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:44:26 functional-135520 crio[5849]: time="2025-10-06T14:44:26.007899082Z" level=info msg="createCtr: deleting container 51bb13d13b5ad8cb8a85db1595e2ef29e491fbadc8e8a59c507e761a92686902 from storage" id=9cab203c-0469-458e-b752-a4b8f8d23d32 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:44:26 functional-135520 crio[5849]: time="2025-10-06T14:44:26.010003386Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-functional-135520_kube-system_9c0f460a73b4e4a7087ce2a722c4cad4_0" id=9cab203c-0469-458e-b752-a4b8f8d23d32 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:44:29 functional-135520 crio[5849]: time="2025-10-06T14:44:29.98068266Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=1c865cf0-7c5c-4deb-9b31-b4f15f4b0e2f name=/runtime.v1.ImageService/ImageStatus
	Oct 06 14:44:29 functional-135520 crio[5849]: time="2025-10-06T14:44:29.981721234Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=d89ee47d-03a7-49b3-9c66-819390e421ae name=/runtime.v1.ImageService/ImageStatus
	Oct 06 14:44:29 functional-135520 crio[5849]: time="2025-10-06T14:44:29.982678128Z" level=info msg="Creating container: kube-system/kube-controller-manager-functional-135520/kube-controller-manager" id=8cbb0325-6c25-46db-856e-1c4590ee02fe name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:44:29 functional-135520 crio[5849]: time="2025-10-06T14:44:29.982906284Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 14:44:29 functional-135520 crio[5849]: time="2025-10-06T14:44:29.986090027Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 14:44:29 functional-135520 crio[5849]: time="2025-10-06T14:44:29.986547541Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 14:44:30 functional-135520 crio[5849]: time="2025-10-06T14:44:30.002059884Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=8cbb0325-6c25-46db-856e-1c4590ee02fe name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:44:30 functional-135520 crio[5849]: time="2025-10-06T14:44:30.003417309Z" level=info msg="createCtr: deleting container ID 37d7c3965d155f955efcaec117751ff695350cffdde0938013692a87b8bf8ae7 from idIndex" id=8cbb0325-6c25-46db-856e-1c4590ee02fe name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:44:30 functional-135520 crio[5849]: time="2025-10-06T14:44:30.003455574Z" level=info msg="createCtr: removing container 37d7c3965d155f955efcaec117751ff695350cffdde0938013692a87b8bf8ae7" id=8cbb0325-6c25-46db-856e-1c4590ee02fe name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:44:30 functional-135520 crio[5849]: time="2025-10-06T14:44:30.003487226Z" level=info msg="createCtr: deleting container 37d7c3965d155f955efcaec117751ff695350cffdde0938013692a87b8bf8ae7 from storage" id=8cbb0325-6c25-46db-856e-1c4590ee02fe name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:44:30 functional-135520 crio[5849]: time="2025-10-06T14:44:30.005504499Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-functional-135520_kube-system_09d686e340c6809af92c3f18dc65ef21_0" id=8cbb0325-6c25-46db-856e-1c4590ee02fe name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:44:31.753115   19199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:44:31.753722   19199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:44:31.755307   19199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:44:31.755724   19199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:44:31.757269   19199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	
	
	==> kernel <==
	 14:44:31 up  5:26,  0 user,  load average: 0.02, 0.13, 0.24
	Linux functional-135520 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 06 14:44:24 functional-135520 kubelet[14966]:  > logger="UnhandledError"
	Oct 06 14:44:24 functional-135520 kubelet[14966]: E1006 14:44:24.003070   14966 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-functional-135520" podUID="5115bd1eba9594a3f2b99b5d6a4b9d59"
	Oct 06 14:44:25 functional-135520 kubelet[14966]: E1006 14:44:25.233508   14966 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.49.2:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-135520&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	Oct 06 14:44:25 functional-135520 kubelet[14966]: E1006 14:44:25.980028   14966 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-135520\" not found" node="functional-135520"
	Oct 06 14:44:26 functional-135520 kubelet[14966]: E1006 14:44:26.010366   14966 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 06 14:44:26 functional-135520 kubelet[14966]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 06 14:44:26 functional-135520 kubelet[14966]:  > podSandboxID="0bf6050e948f47f363040ce421949b89bef2d06623cc9fef382c27f04872ce86"
	Oct 06 14:44:26 functional-135520 kubelet[14966]: E1006 14:44:26.010491   14966 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 06 14:44:26 functional-135520 kubelet[14966]:         container kube-apiserver start failed in pod kube-apiserver-functional-135520_kube-system(9c0f460a73b4e4a7087ce2a722c4cad4): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 06 14:44:26 functional-135520 kubelet[14966]:  > logger="UnhandledError"
	Oct 06 14:44:26 functional-135520 kubelet[14966]: E1006 14:44:26.010531   14966 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-functional-135520" podUID="9c0f460a73b4e4a7087ce2a722c4cad4"
	Oct 06 14:44:27 functional-135520 kubelet[14966]: E1006 14:44:27.185499   14966 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://192.168.49.2:8441/api/v1/namespaces/default/events/functional-135520.186beda70239e997\": dial tcp 192.168.49.2:8441: connect: connection refused" event="&Event{ObjectMeta:{functional-135520.186beda70239e997  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-135520,UID:functional-135520,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node functional-135520 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:functional-135520,},FirstTimestamp:2025-10-06 14:36:20.970981783 +0000 UTC m=+1.419805161,LastTimestamp:2025-10-06 14:36:20.972567199 +0000 UTC m=+1.421390567,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-135520,}"
	Oct 06 14:44:29 functional-135520 kubelet[14966]: E1006 14:44:29.589582   14966 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://192.168.49.2:8441/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
	Oct 06 14:44:29 functional-135520 kubelet[14966]: E1006 14:44:29.640128   14966 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-135520?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="7s"
	Oct 06 14:44:29 functional-135520 kubelet[14966]: I1006 14:44:29.839421   14966 kubelet_node_status.go:75] "Attempting to register node" node="functional-135520"
	Oct 06 14:44:29 functional-135520 kubelet[14966]: E1006 14:44:29.839835   14966 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8441/api/v1/nodes\": dial tcp 192.168.49.2:8441: connect: connection refused" node="functional-135520"
	Oct 06 14:44:29 functional-135520 kubelet[14966]: E1006 14:44:29.980173   14966 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-135520\" not found" node="functional-135520"
	Oct 06 14:44:30 functional-135520 kubelet[14966]: E1006 14:44:30.005807   14966 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 06 14:44:30 functional-135520 kubelet[14966]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 06 14:44:30 functional-135520 kubelet[14966]:  > podSandboxID="e06459a5221479b8f8ca8a805df180001ae8c03ad8ebddffca24e6ba8a2614e8"
	Oct 06 14:44:30 functional-135520 kubelet[14966]: E1006 14:44:30.005925   14966 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 06 14:44:30 functional-135520 kubelet[14966]:         container kube-controller-manager start failed in pod kube-controller-manager-functional-135520_kube-system(09d686e340c6809af92c3f18dc65ef21): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 06 14:44:30 functional-135520 kubelet[14966]:  > logger="UnhandledError"
	Oct 06 14:44:30 functional-135520 kubelet[14966]: E1006 14:44:30.005967   14966 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-functional-135520" podUID="09d686e340c6809af92c3f18dc65ef21"
	Oct 06 14:44:31 functional-135520 kubelet[14966]: E1006 14:44:31.010999   14966 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-135520\" not found"
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-135520 -n functional-135520
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-135520 -n functional-135520: exit status 2 (297.358608ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-135520" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (241.57s)
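Triage note: every control-plane container in the logs above dies in CreateContainer with "cannot open sd-bus: No such file or directory". When the runtime uses the systemd cgroup manager (minikube's default with CRI-O; the host's docker info above also reports CgroupDriver:systemd), runc reaches systemd through its private D-Bus socket, so this error typically means that socket is unavailable inside the kicbase container; the inspect output shows /run mounted as tmpfs, so the socket exists only once systemd has booted cleanly. A minimal check, reusing the container name from the inspect output; these commands are illustrative and were not part of the recorded run:

	docker exec functional-135520 ps -p 1 -o comm=            # should print "systemd" (the entrypoint execs /sbin/init)
	docker exec functional-135520 ls -l /run/systemd/private  # the sd-bus socket the systemd cgroup manager opens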

x
+
TestFunctional/parallel/MySQL (2.35s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-135520 replace --force -f testdata/mysql.yaml
functional_test.go:1798: (dbg) Non-zero exit: kubectl --context functional-135520 replace --force -f testdata/mysql.yaml: exit status 1 (53.655314ms)

** stderr ** 
	E1006 14:40:28.252286  671191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1006 14:40:28.252862  671191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	unable to recognize "testdata/mysql.yaml": Get "https://192.168.49.2:8441/api?timeout=32s": dial tcp 192.168.49.2:8441: connect: connection refused
	unable to recognize "testdata/mysql.yaml": Get "https://192.168.49.2:8441/api?timeout=32s": dial tcp 192.168.49.2:8441: connect: connection refused

** /stderr **
functional_test.go:1800: failed to kubectl replace mysql: args "kubectl --context functional-135520 replace --force -f testdata/mysql.yaml" failed: exit status 1
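(The "connection refused" on 192.168.49.2:8441 is the same root symptom: kube-apiserver never started because its container hits the sd-bus error noted above, so kubectl fails before any object is applied. For illustration only, and not part of the recorded run, one could confirm nothing is listening on the apiserver port:

	nc -zv 192.168.49.2 8441)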
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/MySQL]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/MySQL]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-135520
helpers_test.go:243: (dbg) docker inspect functional-135520:

-- stdout --
	[
	    {
	        "Id": "3dd9a226ea42760de316c3cd10c240a06a297d856d2072b1bb8c9de31097dd20",
	        "Created": "2025-10-06T14:13:32.283355011Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 644403,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-06T14:13:32.318096257Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/3dd9a226ea42760de316c3cd10c240a06a297d856d2072b1bb8c9de31097dd20/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3dd9a226ea42760de316c3cd10c240a06a297d856d2072b1bb8c9de31097dd20/hostname",
	        "HostsPath": "/var/lib/docker/containers/3dd9a226ea42760de316c3cd10c240a06a297d856d2072b1bb8c9de31097dd20/hosts",
	        "LogPath": "/var/lib/docker/containers/3dd9a226ea42760de316c3cd10c240a06a297d856d2072b1bb8c9de31097dd20/3dd9a226ea42760de316c3cd10c240a06a297d856d2072b1bb8c9de31097dd20-json.log",
	        "Name": "/functional-135520",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-135520:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-135520",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "3dd9a226ea42760de316c3cd10c240a06a297d856d2072b1bb8c9de31097dd20",
	                "LowerDir": "/var/lib/docker/overlay2/fc963905026931708302dacddcd89a9d41c6b02cea585cc1ff491aa62dc8d60a-init/diff:/var/lib/docker/overlay2/498c39ad2e273bbda04a4b230222b9767ea2da097b1fe98436168d26143cd080/diff",
	                "MergedDir": "/var/lib/docker/overlay2/fc963905026931708302dacddcd89a9d41c6b02cea585cc1ff491aa62dc8d60a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/fc963905026931708302dacddcd89a9d41c6b02cea585cc1ff491aa62dc8d60a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/fc963905026931708302dacddcd89a9d41c6b02cea585cc1ff491aa62dc8d60a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-135520",
	                "Source": "/var/lib/docker/volumes/functional-135520/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-135520",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-135520",
	                "name.minikube.sigs.k8s.io": "functional-135520",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6368ffca3e5840f94a34614c511d9f0a0a4ca0d05de4fe1f94c8bfdc332f1a62",
	            "SandboxKey": "/var/run/docker/netns/6368ffca3e58",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32878"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32879"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32882"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32880"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32881"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-135520": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ae:d1:94:25:38:1c",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f712be59dd18dac98bed5f234c9f77a39e85277143d6f46285adcd3b0185d552",
	                    "EndpointID": "b816964b653b1b5116e3262dfdc87af272931013ef5b9e2714c9ff7357118a6f",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-135520",
	                        "3dd9a226ea42"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-135520 -n functional-135520
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-135520 -n functional-135520: exit status 2 (351.142254ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctional/parallel/MySQL FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/MySQL]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-135520 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-135520 logs -n 25: (1.062415895s)
helpers_test.go:260: TestFunctional/parallel/MySQL logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable dashboard -p addons-834039                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-834039     │ jenkins │ v1.37.0 │ 06 Oct 25 13:56 UTC │                     │
	│ addons  │ disable dashboard -p addons-834039                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-834039     │ jenkins │ v1.37.0 │ 06 Oct 25 13:56 UTC │                     │
	│ start   │ -p addons-834039 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-834039     │ jenkins │ v1.37.0 │ 06 Oct 25 13:56 UTC │                     │
	│ delete  │ -p addons-834039                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-834039     │ jenkins │ v1.37.0 │ 06 Oct 25 14:04 UTC │ 06 Oct 25 14:04 UTC │
	│ start   │ -p nospam-500584 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-500584 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                  │ nospam-500584     │ jenkins │ v1.37.0 │ 06 Oct 25 14:04 UTC │                     │
	│ start   │ nospam-500584 --log_dir /tmp/nospam-500584 start --dry-run                                                                                                                                                                                                                                                                                                                                                                                                               │ nospam-500584     │ jenkins │ v1.37.0 │ 06 Oct 25 14:13 UTC │                     │
	│ start   │ nospam-500584 --log_dir /tmp/nospam-500584 start --dry-run                                                                                                                                                                                                                                                                                                                                                                                                               │ nospam-500584     │ jenkins │ v1.37.0 │ 06 Oct 25 14:13 UTC │                     │
	│ start   │ nospam-500584 --log_dir /tmp/nospam-500584 start --dry-run                                                                                                                                                                                                                                                                                                                                                                                                               │ nospam-500584     │ jenkins │ v1.37.0 │ 06 Oct 25 14:13 UTC │                     │
	│ pause   │ nospam-500584 --log_dir /tmp/nospam-500584 pause                                                                                                                                                                                                                                                                                                                                                                                                                         │ nospam-500584     │ jenkins │ v1.37.0 │ 06 Oct 25 14:13 UTC │ 06 Oct 25 14:13 UTC │
	│ pause   │ nospam-500584 --log_dir /tmp/nospam-500584 pause                                                                                                                                                                                                                                                                                                                                                                                                                         │ nospam-500584     │ jenkins │ v1.37.0 │ 06 Oct 25 14:13 UTC │ 06 Oct 25 14:13 UTC │
	│ pause   │ nospam-500584 --log_dir /tmp/nospam-500584 pause                                                                                                                                                                                                                                                                                                                                                                                                                         │ nospam-500584     │ jenkins │ v1.37.0 │ 06 Oct 25 14:13 UTC │ 06 Oct 25 14:13 UTC │
	│ unpause │ nospam-500584 --log_dir /tmp/nospam-500584 unpause                                                                                                                                                                                                                                                                                                                                                                                                                       │ nospam-500584     │ jenkins │ v1.37.0 │ 06 Oct 25 14:13 UTC │ 06 Oct 25 14:13 UTC │
	│ unpause │ nospam-500584 --log_dir /tmp/nospam-500584 unpause                                                                                                                                                                                                                                                                                                                                                                                                                       │ nospam-500584     │ jenkins │ v1.37.0 │ 06 Oct 25 14:13 UTC │ 06 Oct 25 14:13 UTC │
	│ unpause │ nospam-500584 --log_dir /tmp/nospam-500584 unpause                                                                                                                                                                                                                                                                                                                                                                                                                       │ nospam-500584     │ jenkins │ v1.37.0 │ 06 Oct 25 14:13 UTC │ 06 Oct 25 14:13 UTC │
	│ stop    │ nospam-500584 --log_dir /tmp/nospam-500584 stop                                                                                                                                                                                                                                                                                                                                                                                                                          │ nospam-500584     │ jenkins │ v1.37.0 │ 06 Oct 25 14:13 UTC │ 06 Oct 25 14:13 UTC │
	│ stop    │ nospam-500584 --log_dir /tmp/nospam-500584 stop                                                                                                                                                                                                                                                                                                                                                                                                                          │ nospam-500584     │ jenkins │ v1.37.0 │ 06 Oct 25 14:13 UTC │ 06 Oct 25 14:13 UTC │
	│ stop    │ nospam-500584 --log_dir /tmp/nospam-500584 stop                                                                                                                                                                                                                                                                                                                                                                                                                          │ nospam-500584     │ jenkins │ v1.37.0 │ 06 Oct 25 14:13 UTC │ 06 Oct 25 14:13 UTC │
	│ delete  │ -p nospam-500584                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ nospam-500584     │ jenkins │ v1.37.0 │ 06 Oct 25 14:13 UTC │ 06 Oct 25 14:13 UTC │
	│ start   │ -p functional-135520 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                            │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:13 UTC │                     │
	│ start   │ -p functional-135520 --alsologtostderr -v=8                                                                                                                                                                                                                                                                                                                                                                                                                              │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:21 UTC │                     │
	│ cache   │ functional-135520 cache add registry.k8s.io/pause:3.1                                                                                                                                                                                                                                                                                                                                                                                                                    │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:27 UTC │ 06 Oct 25 14:27 UTC │
	│ ssh     │ functional-135520 ssh -n functional-135520 sudo cat /home/docker/cp-test.txt                                                                                                                                                                                                                                                                                                                                                                                             │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │ 06 Oct 25 14:40 UTC │
	│ ssh     │ functional-135520 ssh sudo cat /etc/ssl/certs/51391683.0                                                                                                                                                                                                                                                                                                                                                                                                                 │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │ 06 Oct 25 14:40 UTC │
	│ cp      │ functional-135520 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                                                                                                                                                                                                                                                                                                                                                                                │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │                     │
	│ ssh     │ functional-135520 ssh sudo cat /etc/ssl/certs/6297192.pem                                                                                                                                                                                                                                                                                                                                                                                                                │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/06 14:28:06
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1006 14:28:06.515575  656123 out.go:360] Setting OutFile to fd 1 ...
	I1006 14:28:06.515775  656123 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 14:28:06.515777  656123 out.go:374] Setting ErrFile to fd 2...
	I1006 14:28:06.515780  656123 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 14:28:06.515998  656123 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-626179/.minikube/bin
	I1006 14:28:06.516461  656123 out.go:368] Setting JSON to false
	I1006 14:28:06.517416  656123 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":18622,"bootTime":1759742264,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1006 14:28:06.517495  656123 start.go:140] virtualization: kvm guest
	I1006 14:28:06.519514  656123 out.go:179] * [functional-135520] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1006 14:28:06.520800  656123 out.go:179]   - MINIKUBE_LOCATION=21701
	I1006 14:28:06.520851  656123 notify.go:220] Checking for updates...
	I1006 14:28:06.523025  656123 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1006 14:28:06.524163  656123 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21701-626179/kubeconfig
	I1006 14:28:06.525184  656123 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21701-626179/.minikube
	I1006 14:28:06.526184  656123 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1006 14:28:06.527199  656123 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1006 14:28:06.528788  656123 config.go:182] Loaded profile config "functional-135520": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 14:28:06.528884  656123 driver.go:421] Setting default libvirt URI to qemu:///system
	I1006 14:28:06.553892  656123 docker.go:123] docker version: linux-28.5.0:Docker Engine - Community
	I1006 14:28:06.554005  656123 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 14:28:06.610913  656123 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:false NGoroutines:58 SystemTime:2025-10-06 14:28:06.599957285 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1006 14:28:06.611014  656123 docker.go:318] overlay module found
	I1006 14:28:06.612730  656123 out.go:179] * Using the docker driver based on existing profile
	I1006 14:28:06.613792  656123 start.go:304] selected driver: docker
	I1006 14:28:06.613801  656123 start.go:924] validating driver "docker" against &{Name:functional-135520 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-135520 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 14:28:06.613876  656123 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1006 14:28:06.613960  656123 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 14:28:06.672658  656123 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:false NGoroutines:58 SystemTime:2025-10-06 14:28:06.663055015 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1006 14:28:06.673343  656123 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1006 14:28:06.673382  656123 cni.go:84] Creating CNI manager for ""
	I1006 14:28:06.673449  656123 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1006 14:28:06.673491  656123 start.go:348] cluster config:
	{Name:functional-135520 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-135520 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 14:28:06.675542  656123 out.go:179] * Starting "functional-135520" primary control-plane node in "functional-135520" cluster
	I1006 14:28:06.676765  656123 cache.go:123] Beginning downloading kic base image for docker with crio
	I1006 14:28:06.678012  656123 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1006 14:28:06.679109  656123 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1006 14:28:06.679148  656123 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21701-626179/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1006 14:28:06.679171  656123 cache.go:58] Caching tarball of preloaded images
	I1006 14:28:06.679229  656123 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1006 14:28:06.679315  656123 preload.go:233] Found /home/jenkins/minikube-integration/21701-626179/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1006 14:28:06.679322  656123 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1006 14:28:06.679424  656123 profile.go:143] Saving config to /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/config.json ...
	I1006 14:28:06.701440  656123 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1006 14:28:06.701451  656123 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1006 14:28:06.701470  656123 cache.go:232] Successfully downloaded all kic artifacts
	I1006 14:28:06.701500  656123 start.go:360] acquireMachinesLock for functional-135520: {Name:mk634323c4619e77647ac9d9aaca492e399526ae Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1006 14:28:06.701582  656123 start.go:364] duration metric: took 55.883µs to acquireMachinesLock for "functional-135520"
	I1006 14:28:06.701608  656123 start.go:96] Skipping create...Using existing machine configuration
	I1006 14:28:06.701614  656123 fix.go:54] fixHost starting: 
	I1006 14:28:06.701815  656123 cli_runner.go:164] Run: docker container inspect functional-135520 --format={{.State.Status}}
	I1006 14:28:06.719582  656123 fix.go:112] recreateIfNeeded on functional-135520: state=Running err=<nil>
	W1006 14:28:06.719608  656123 fix.go:138] unexpected machine state, will restart: <nil>
	I1006 14:28:06.721479  656123 out.go:252] * Updating the running docker "functional-135520" container ...
	I1006 14:28:06.721509  656123 machine.go:93] provisionDockerMachine start ...
	I1006 14:28:06.721596  656123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-135520
	I1006 14:28:06.739776  656123 main.go:141] libmachine: Using SSH client type: native
	I1006 14:28:06.740016  656123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32878 <nil> <nil>}
	I1006 14:28:06.740022  656123 main.go:141] libmachine: About to run SSH command:
	hostname
	I1006 14:28:06.883328  656123 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-135520
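The "Using SSH client type: native" lines run provisioning commands over the container's forwarded SSH port. A minimal sketch of the same round trip with golang.org/x/crypto/ssh, assuming the port (32878), user (docker), and key path shown in the surrounding log lines; this is not minikube's libmachine implementation:

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Key path and forwarded port taken from the log above.
	key, err := os.ReadFile("/home/jenkins/minikube-integration/21701-626179/.minikube/machines/functional-135520/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local test container
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:32878", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()
	out, err := sess.CombinedOutput("hostname")
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out)) // prints "functional-135520", as in the log
}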
	
	I1006 14:28:06.883355  656123 ubuntu.go:182] provisioning hostname "functional-135520"
	I1006 14:28:06.883416  656123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-135520
	I1006 14:28:06.901008  656123 main.go:141] libmachine: Using SSH client type: native
	I1006 14:28:06.901274  656123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32878 <nil> <nil>}
	I1006 14:28:06.901282  656123 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-135520 && echo "functional-135520" | sudo tee /etc/hostname
	I1006 14:28:07.054829  656123 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-135520
	
	I1006 14:28:07.054893  656123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-135520
	I1006 14:28:07.073103  656123 main.go:141] libmachine: Using SSH client type: native
	I1006 14:28:07.073400  656123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32878 <nil> <nil>}
	I1006 14:28:07.073412  656123 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-135520' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-135520/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-135520' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1006 14:28:07.218044  656123 main.go:141] libmachine: SSH cmd err, output: <nil>: 
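The shell block above keeps /etc/hosts consistent with the new hostname: it rewrites an existing 127.0.1.1 entry in place, or appends one if none exists. A sketch of composing that snippet in Go before sending it over SSH; hostsFixCommand is a hypothetical helper, not a minikube function:

package main

import "fmt"

// hostsFixCommand returns the shell seen in the log: map 127.0.1.1 to the
// given hostname, editing any existing 127.0.1.1 line or appending a new one.
func hostsFixCommand(hostname string) string {
	return fmt.Sprintf(`if ! grep -xq '.*\s%[1]s' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
  else
    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
  fi
fi`, hostname)
}

func main() { fmt.Println(hostsFixCommand("functional-135520")) }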
	I1006 14:28:07.218064  656123 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21701-626179/.minikube CaCertPath:/home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21701-626179/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21701-626179/.minikube}
	I1006 14:28:07.218086  656123 ubuntu.go:190] setting up certificates
	I1006 14:28:07.218097  656123 provision.go:84] configureAuth start
	I1006 14:28:07.218147  656123 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-135520
	I1006 14:28:07.235320  656123 provision.go:143] copyHostCerts
	I1006 14:28:07.235375  656123 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-626179/.minikube/ca.pem, removing ...
	I1006 14:28:07.235390  656123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-626179/.minikube/ca.pem
	I1006 14:28:07.235462  656123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21701-626179/.minikube/ca.pem (1082 bytes)
	I1006 14:28:07.235557  656123 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-626179/.minikube/cert.pem, removing ...
	I1006 14:28:07.235561  656123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-626179/.minikube/cert.pem
	I1006 14:28:07.235585  656123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21701-626179/.minikube/cert.pem (1123 bytes)
	I1006 14:28:07.235653  656123 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-626179/.minikube/key.pem, removing ...
	I1006 14:28:07.235656  656123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-626179/.minikube/key.pem
	I1006 14:28:07.235685  656123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21701-626179/.minikube/key.pem (1679 bytes)
	I1006 14:28:07.235742  656123 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21701-626179/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca-key.pem org=jenkins.functional-135520 san=[127.0.0.1 192.168.49.2 functional-135520 localhost minikube]
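The server certificate generated here is signed by the minikube CA and carries the SAN list printed above, so one cert serves 127.0.0.1, the container IP, and the host names. A self-contained sketch of such an issuance with crypto/x509, substituting a throwaway CA for minikubeCA; simplified, not minikube's provisioning code:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Throwaway CA standing in for minikubeCA (errors elided for brevity).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	ca, _ := x509.ParseCertificate(caDER)

	// Server certificate with the org and SAN list from the log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.functional-135520"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
		DNSNames:     []string{"functional-135520", "localhost", "minikube"},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, ca, &srvKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	fmt.Printf("issued server cert: %d DER bytes\n", len(srvDER))
}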
	I1006 14:28:07.452963  656123 provision.go:177] copyRemoteCerts
	I1006 14:28:07.453021  656123 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1006 14:28:07.453058  656123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-135520
	I1006 14:28:07.470979  656123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32878 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/functional-135520/id_rsa Username:docker}
	I1006 14:28:07.572166  656123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1006 14:28:07.589268  656123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1006 14:28:07.606864  656123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1006 14:28:07.624012  656123 provision.go:87] duration metric: took 405.903097ms to configureAuth
	I1006 14:28:07.624031  656123 ubuntu.go:206] setting minikube options for container-runtime
	I1006 14:28:07.624198  656123 config.go:182] Loaded profile config "functional-135520": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 14:28:07.624358  656123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-135520
	I1006 14:28:07.642129  656123 main.go:141] libmachine: Using SSH client type: native
	I1006 14:28:07.642348  656123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32878 <nil> <nil>}
	I1006 14:28:07.642358  656123 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1006 14:28:07.930562  656123 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1006 14:28:07.930579  656123 machine.go:96] duration metric: took 1.209063221s to provisionDockerMachine
	I1006 14:28:07.930589  656123 start.go:293] postStartSetup for "functional-135520" (driver="docker")
	I1006 14:28:07.930598  656123 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1006 14:28:07.930651  656123 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1006 14:28:07.930687  656123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-135520
	I1006 14:28:07.948006  656123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32878 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/functional-135520/id_rsa Username:docker}
	I1006 14:28:08.049510  656123 ssh_runner.go:195] Run: cat /etc/os-release
	I1006 14:28:08.053027  656123 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1006 14:28:08.053042  656123 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1006 14:28:08.053061  656123 filesync.go:126] Scanning /home/jenkins/minikube-integration/21701-626179/.minikube/addons for local assets ...
	I1006 14:28:08.053110  656123 filesync.go:126] Scanning /home/jenkins/minikube-integration/21701-626179/.minikube/files for local assets ...
	I1006 14:28:08.053177  656123 filesync.go:149] local asset: /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem -> 6297192.pem in /etc/ssl/certs
	I1006 14:28:08.053267  656123 filesync.go:149] local asset: /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/test/nested/copy/629719/hosts -> hosts in /etc/test/nested/copy/629719
	I1006 14:28:08.053298  656123 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/629719
	I1006 14:28:08.060796  656123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem --> /etc/ssl/certs/6297192.pem (1708 bytes)
	I1006 14:28:08.077999  656123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/test/nested/copy/629719/hosts --> /etc/test/nested/copy/629719/hosts (40 bytes)
	I1006 14:28:08.094766  656123 start.go:296] duration metric: took 164.165544ms for postStartSetup
	I1006 14:28:08.094821  656123 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1006 14:28:08.094852  656123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-135520
	I1006 14:28:08.112238  656123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32878 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/functional-135520/id_rsa Username:docker}
	I1006 14:28:08.210200  656123 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1006 14:28:08.214744  656123 fix.go:56] duration metric: took 1.513121746s for fixHost
	I1006 14:28:08.214763  656123 start.go:83] releasing machines lock for "functional-135520", held for 1.513172056s
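acquireMachinesLock serializes machine create/fix operations per machine name; the {Delay:500ms Timeout:10m0s} settings above suggest poll-until-deadline semantics. A hedged sketch of that pattern using an exclusive lock file (the path and helper are invented here; minikube's real lock implementation differs):

package main

import (
	"fmt"
	"os"
	"time"
)

// acquire polls for an exclusive lock file every delay until timeout,
// mirroring the Delay/Timeout fields logged above.
func acquire(path string, delay, timeout time.Duration) (func(), error) {
	deadline := time.Now().Add(timeout)
	for {
		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err == nil {
			f.Close()
			return func() { os.Remove(path) }, nil // release callback
		}
		if time.Now().After(deadline) {
			return nil, fmt.Errorf("timed out acquiring %s", path)
		}
		time.Sleep(delay)
	}
}

func main() {
	release, err := acquire("/tmp/machines-functional-135520.lock", 500*time.Millisecond, 10*time.Minute)
	if err != nil {
		panic(err)
	}
	defer release()
	fmt.Println("lock held; fixHost would run here")
}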
	I1006 14:28:08.214831  656123 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-135520
	I1006 14:28:08.231996  656123 ssh_runner.go:195] Run: cat /version.json
	I1006 14:28:08.232006  656123 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1006 14:28:08.232033  656123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-135520
	I1006 14:28:08.232059  656123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-135520
	I1006 14:28:08.250015  656123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32878 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/functional-135520/id_rsa Username:docker}
	I1006 14:28:08.250313  656123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32878 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/functional-135520/id_rsa Username:docker}
	I1006 14:28:08.415268  656123 ssh_runner.go:195] Run: systemctl --version
	I1006 14:28:08.422068  656123 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1006 14:28:08.458421  656123 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1006 14:28:08.463104  656123 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1006 14:28:08.463164  656123 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1006 14:28:08.471006  656123 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
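The find command above sidelines any pre-existing bridge/podman CNI configs, renaming them with a .mk_disabled suffix so kindnet can own pod networking; in this run none were present. An equivalent sweep in Go, purely illustrative since the log does this with find over SSH:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	const dir = "/etc/cni/net.d"
	entries, err := os.ReadDir(dir)
	if err != nil {
		panic(err)
	}
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue // already disabled, or not a plain config file
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				panic(err)
			}
			fmt.Println("disabled", src)
		}
	}
}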
	I1006 14:28:08.471018  656123 start.go:495] detecting cgroup driver to use...
	I1006 14:28:08.471045  656123 detect.go:190] detected "systemd" cgroup driver on host os
	I1006 14:28:08.471088  656123 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1006 14:28:08.485271  656123 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1006 14:28:08.496859  656123 docker.go:218] disabling cri-docker service (if available) ...
	I1006 14:28:08.496895  656123 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1006 14:28:08.510507  656123 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1006 14:28:08.522301  656123 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1006 14:28:08.600902  656123 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1006 14:28:08.681762  656123 docker.go:234] disabling docker service ...
	I1006 14:28:08.681827  656123 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1006 14:28:08.696663  656123 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1006 14:28:08.708614  656123 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1006 14:28:08.788151  656123 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1006 14:28:08.872163  656123 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1006 14:28:08.884753  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1006 14:28:08.898897  656123 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1006 14:28:08.898940  656123 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:28:08.907545  656123 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1006 14:28:08.907597  656123 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:28:08.916027  656123 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:28:08.924428  656123 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:28:08.932498  656123 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1006 14:28:08.939984  656123 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:28:08.948324  656123 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:28:08.956705  656123 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
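These sed invocations point CRI-O's drop-in at the expected pause image and cgroup manager. A Go rendering of the first two edits over the drop-in's text (the sample input is invented; minikube applies the change by shelling out to sed as shown above):

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Invented sample of /etc/crio/crio.conf.d/02-crio.conf content.
	conf := `pause_image = "registry.k8s.io/pause:3.9"
cgroup_manager = "cgroupfs"
`
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "systemd"`)
	fmt.Print(conf)
}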
	I1006 14:28:08.964969  656123 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1006 14:28:08.971804  656123 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1006 14:28:08.978693  656123 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 14:28:09.061389  656123 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1006 14:28:09.170335  656123 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1006 14:28:09.170401  656123 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1006 14:28:09.174497  656123 start.go:563] Will wait 60s for crictl version
	I1006 14:28:09.174546  656123 ssh_runner.go:195] Run: which crictl
	I1006 14:28:09.177947  656123 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1006 14:28:09.201915  656123 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1006 14:28:09.201972  656123 ssh_runner.go:195] Run: crio --version
	I1006 14:28:09.230589  656123 ssh_runner.go:195] Run: crio --version
	I1006 14:28:09.260606  656123 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1006 14:28:09.261947  656123 cli_runner.go:164] Run: docker network inspect functional-135520 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1006 14:28:09.278672  656123 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1006 14:28:09.284367  656123 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1006 14:28:09.285382  656123 kubeadm.go:883] updating cluster {Name:functional-135520 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-135520 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1006 14:28:09.285546  656123 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1006 14:28:09.285603  656123 ssh_runner.go:195] Run: sudo crictl images --output json
	I1006 14:28:09.318027  656123 crio.go:514] all images are preloaded for cri-o runtime.
	I1006 14:28:09.318039  656123 crio.go:433] Images already preloaded, skipping extraction
	I1006 14:28:09.318088  656123 ssh_runner.go:195] Run: sudo crictl images --output json
	I1006 14:28:09.342904  656123 crio.go:514] all images are preloaded for cri-o runtime.
	I1006 14:28:09.342917  656123 cache_images.go:85] Images are preloaded, skipping loading
	I1006 14:28:09.342923  656123 kubeadm.go:934] updating node { 192.168.49.2 8441 v1.34.1 crio true true} ...
	I1006 14:28:09.343012  656123 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-135520 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:functional-135520 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1006 14:28:09.343066  656123 ssh_runner.go:195] Run: crio config
	I1006 14:28:09.388889  656123 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1006 14:28:09.388909  656123 cni.go:84] Creating CNI manager for ""
	I1006 14:28:09.388921  656123 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1006 14:28:09.388932  656123 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1006 14:28:09.388955  656123 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-135520 NodeName:functional-135520 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1006 14:28:09.389087  656123 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-135520"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
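Kubeadm configs like the one above are typically rendered from a template driven by the cluster settings (port 8441, v1.34.1, the pod and service CIDRs). A minimal text/template sketch with invented field names, not minikube's bootstrapper types:

package main

import (
	"os"
	"text/template"
)

// A trimmed-down template covering a few of the fields visible above.
const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
controlPlaneEndpoint: control-plane.minikube.internal:{{.Port}}
kubernetesVersion: {{.Version}}
networking:
  podSubnet: "{{.PodCIDR}}"
  serviceSubnet: {{.ServiceCIDR}}
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
	_ = t.Execute(os.Stdout, map[string]string{
		"Port":        "8441",
		"Version":     "v1.34.1",
		"PodCIDR":     "10.244.0.0/16",
		"ServiceCIDR": "10.96.0.0/12",
	})
}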
	
	I1006 14:28:09.389140  656123 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1006 14:28:09.397400  656123 binaries.go:44] Found k8s binaries, skipping transfer
	I1006 14:28:09.397454  656123 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1006 14:28:09.404846  656123 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1006 14:28:09.416672  656123 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1006 14:28:09.428910  656123 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2063 bytes)
	I1006 14:28:09.440961  656123 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1006 14:28:09.444714  656123 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 14:28:09.533656  656123 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1006 14:28:09.546185  656123 certs.go:69] Setting up /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520 for IP: 192.168.49.2
	I1006 14:28:09.546197  656123 certs.go:195] generating shared ca certs ...
	I1006 14:28:09.546290  656123 certs.go:227] acquiring lock for ca certs: {Name:mka0cc25cb6a953e937aa825fc55167759271aaf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:28:09.546440  656123 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21701-626179/.minikube/ca.key
	I1006 14:28:09.546475  656123 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21701-626179/.minikube/proxy-client-ca.key
	I1006 14:28:09.546482  656123 certs.go:257] generating profile certs ...
	I1006 14:28:09.546559  656123 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/client.key
	I1006 14:28:09.546594  656123 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/apiserver.key.72a46e8e
	I1006 14:28:09.546623  656123 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/proxy-client.key
	I1006 14:28:09.546728  656123 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/629719.pem (1338 bytes)
	W1006 14:28:09.546750  656123 certs.go:480] ignoring /home/jenkins/minikube-integration/21701-626179/.minikube/certs/629719_empty.pem, impossibly tiny 0 bytes
	I1006 14:28:09.546756  656123 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca-key.pem (1675 bytes)
	I1006 14:28:09.546775  656123 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem (1082 bytes)
	I1006 14:28:09.546793  656123 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/cert.pem (1123 bytes)
	I1006 14:28:09.546809  656123 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/key.pem (1679 bytes)
	I1006 14:28:09.546841  656123 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem (1708 bytes)
	I1006 14:28:09.547453  656123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1006 14:28:09.564638  656123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1006 14:28:09.581181  656123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1006 14:28:09.597600  656123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1006 14:28:09.614361  656123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1006 14:28:09.630631  656123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1006 14:28:09.647147  656123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1006 14:28:09.663361  656123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1006 14:28:09.679821  656123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem --> /usr/share/ca-certificates/6297192.pem (1708 bytes)
	I1006 14:28:09.696763  656123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1006 14:28:09.713335  656123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/certs/629719.pem --> /usr/share/ca-certificates/629719.pem (1338 bytes)
	I1006 14:28:09.729791  656123 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
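Each `scp memory --> <path>` line above copies bytes already held in memory to a file inside the node over the existing SSH connection rather than shelling out to scp. A rough sketch of that pattern with golang.org/x/crypto/ssh follows; the address, user, and password are placeholders, and the real ssh_runner additionally sets permissions and verifies the byte count:

package main

import (
	"log"

	"golang.org/x/crypto/ssh"
)

// writeRemote streams data into dst on the remote host via `sudo tee`,
// mimicking the "scp memory" transfers in the log above.
func writeRemote(client *ssh.Client, data []byte, dst string) error {
	sess, err := client.NewSession()
	if err != nil {
		return err
	}
	defer sess.Close()
	stdin, err := sess.StdinPipe()
	if err != nil {
		return err
	}
	if err := sess.Start("sudo tee " + dst + " >/dev/null"); err != nil {
		return err
	}
	if _, err := stdin.Write(data); err != nil {
		return err
	}
	stdin.Close() // EOF lets tee finish writing
	return sess.Wait()
}

func main() {
	cfg := &ssh.ClientConfig{
		User:            "docker", // placeholder credentials
		Auth:            []ssh.AuthMethod{ssh.Password("example")},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
	}
	client, err := ssh.Dial("tcp", "192.168.49.2:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	if err := writeRemote(client, []byte("contents"), "/var/lib/minikube/kubeconfig"); err != nil {
		log.Fatal(err)
	}
}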
	I1006 14:28:09.741445  656123 ssh_runner.go:195] Run: openssl version
	I1006 14:28:09.747314  656123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1006 14:28:09.755183  656123 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1006 14:28:09.758724  656123 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  6 13:56 /usr/share/ca-certificates/minikubeCA.pem
	I1006 14:28:09.758757  656123 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1006 14:28:09.792226  656123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1006 14:28:09.799947  656123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/629719.pem && ln -fs /usr/share/ca-certificates/629719.pem /etc/ssl/certs/629719.pem"
	I1006 14:28:09.808163  656123 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/629719.pem
	I1006 14:28:09.811711  656123 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  6 14:13 /usr/share/ca-certificates/629719.pem
	I1006 14:28:09.811747  656123 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/629719.pem
	I1006 14:28:09.845740  656123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/629719.pem /etc/ssl/certs/51391683.0"
	I1006 14:28:09.854138  656123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6297192.pem && ln -fs /usr/share/ca-certificates/6297192.pem /etc/ssl/certs/6297192.pem"
	I1006 14:28:09.862651  656123 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6297192.pem
	I1006 14:28:09.866319  656123 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  6 14:13 /usr/share/ca-certificates/6297192.pem
	I1006 14:28:09.866364  656123 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6297192.pem
	I1006 14:28:09.900583  656123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6297192.pem /etc/ssl/certs/3ec20f2e.0"
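The `openssl x509 -hash -noout` / `ln -fs ... <hash>.0` pairs above install each CA into the OpenSSL trust directory, where verification code looks certificates up by subject-hash filename (e.g. b5213941.0 for minikubeCA.pem). A condensed local sketch of the same two steps, assuming openssl is on PATH and the process can write /etc/ssl/certs:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	pem := "/usr/share/ca-certificates/minikubeCA.pem" // cert installed above

	// Step 1: ask openssl for the subject hash (e.g. "b5213941").
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	hash := strings.TrimSpace(string(out))

	// Step 2: create the <hash>.0 symlink OpenSSL resolves at verify time.
	link := "/etc/ssl/certs/" + hash + ".0"
	os.Remove(link) // emulate ln -fs (force overwrite)
	if err := os.Symlink(pem, link); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("linked", link, "->", pem)
}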
	I1006 14:28:09.908997  656123 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1006 14:28:09.912812  656123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1006 14:28:09.946819  656123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1006 14:28:09.981139  656123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1006 14:28:10.015748  656123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1006 14:28:10.049705  656123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1006 14:28:10.084715  656123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
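Each `openssl x509 -checkend 86400` run above exits non-zero if the certificate expires within the next 24 hours, which is how the restart path decides whether a cert must be regenerated. The equivalent check in pure Go, sketched with crypto/x509 against one of the paths from the log:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in path expires
// before now+d — the moral equivalent of `openssl x509 -checkend`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	raw, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return cert.NotAfter.Before(time.Now().Add(d)), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("expires within 24h:", soon)
}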
	I1006 14:28:10.119782  656123 kubeadm.go:400] StartCluster: {Name:functional-135520 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-135520 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 14:28:10.119890  656123 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1006 14:28:10.119973  656123 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1006 14:28:10.149719  656123 cri.go:89] found id: ""
	I1006 14:28:10.149774  656123 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1006 14:28:10.158129  656123 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1006 14:28:10.158143  656123 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1006 14:28:10.158217  656123 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1006 14:28:10.166324  656123 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1006 14:28:10.166847  656123 kubeconfig.go:125] found "functional-135520" server: "https://192.168.49.2:8441"
	I1006 14:28:10.168240  656123 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1006 14:28:10.175929  656123 kubeadm.go:644] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-10-06 14:13:37.047601698 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-10-06 14:28:09.438461717 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
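Config drift is detected by diffing the deployed kubeadm.yaml against the freshly rendered .new file: `diff -u` exits 0 when the files are identical and 1 when they differ, so the unified diff above only appears on drift (here, the changed enable-admission-plugins value). A minimal sketch of reading that exit code from Go:

package main

import (
	"errors"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	cmd := exec.Command("diff", "-u",
		"/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	out, err := cmd.Output()
	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("no drift; configs identical")
	case errors.As(err, &exitErr) && exitErr.ExitCode() == 1:
		// Exit status 1 means the files differ; stdout holds the diff.
		fmt.Printf("drift detected, will reconfigure:\n%s", out)
	default:
		log.Fatal(err) // exit status 2 or exec failure: a real error
	}
}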
	I1006 14:28:10.175939  656123 kubeadm.go:1160] stopping kube-system containers ...
	I1006 14:28:10.175953  656123 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1006 14:28:10.175996  656123 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1006 14:28:10.204289  656123 cri.go:89] found id: ""
	I1006 14:28:10.204358  656123 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1006 14:28:10.246949  656123 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1006 14:28:10.255460  656123 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5635 Oct  6 14:17 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5640 Oct  6 14:17 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5676 Oct  6 14:17 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Oct  6 14:17 /etc/kubernetes/scheduler.conf
	
	I1006 14:28:10.255526  656123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1006 14:28:10.263528  656123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1006 14:28:10.271432  656123 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1006 14:28:10.271482  656123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1006 14:28:10.278844  656123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1006 14:28:10.286462  656123 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1006 14:28:10.286516  656123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1006 14:28:10.293960  656123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1006 14:28:10.301358  656123 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1006 14:28:10.301414  656123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
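The grep/rm pairs above implement "keep each kubeconfig only if it already points at the expected endpoint": grep exits 1 when the URL is absent, and the file is then deleted so the kubeadm kubeconfig phase regenerates it (here admin.conf matched, the other three did not). A compact sketch of that loop, with the endpoint and paths taken from the log and sudo elided:

package main

import (
	"errors"
	"fmt"
	"log"
	"os"
	"os/exec"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8441"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		err := exec.Command("grep", endpoint, f).Run()
		if err == nil {
			continue // endpoint present, keep the file
		}
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) && exitErr.ExitCode() == 1 {
			// No match: stale kubeconfig, remove so kubeadm rewrites it.
			fmt.Println("removing stale", f)
			if err := os.Remove(f); err != nil && !os.IsNotExist(err) {
				log.Fatal(err)
			}
			continue
		}
		log.Fatal(err) // grep exit status 2: file unreadable, etc.
	}
}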
	I1006 14:28:10.308882  656123 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1006 14:28:10.316879  656123 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1006 14:28:10.360770  656123 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1006 14:28:12.195064  656123 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.834266287s)
	I1006 14:28:12.195115  656123 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1006 14:28:12.367120  656123 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1006 14:28:12.417483  656123 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
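Rather than a full `kubeadm init`, the restart path replays individual phases — certs, kubeconfig, kubelet-start, control-plane, etcd — against the same config file, with PATH prefixed so the cached v1.34.1 binaries are picked up. A sketch of that sequence as it appears in the log, again assuming the caller may sudo:

package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
)

func main() {
	const (
		binDir = "/var/lib/minikube/binaries/v1.34.1"
		config = "/var/tmp/minikube/kubeadm.yaml"
	)
	phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
	for _, phase := range phases {
		// Equivalent of: sudo env PATH=<binDir>:$PATH kubeadm init phase <phase> --config <config>
		script := fmt.Sprintf(`env PATH="%s:$PATH" kubeadm init phase %s --config %s`, binDir, phase, config)
		cmd := exec.Command("sudo", "/bin/bash", "-c", script)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			log.Fatalf("phase %q failed: %v", phase, err)
		}
	}
}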
	I1006 14:28:12.470408  656123 api_server.go:52] waiting for apiserver process to appear ...
	I1006 14:28:12.470467  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:12.971496  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:13.471359  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:13.971266  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:14.470628  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:14.970727  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:15.470821  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:15.971537  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:16.470947  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:16.970796  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:17.471324  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:17.970807  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:18.471451  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:18.970803  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:19.471285  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:19.970529  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:20.471499  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:20.971288  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:21.471188  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:21.971466  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:22.471502  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:22.971321  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:23.471284  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:23.970994  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:24.470729  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:24.971445  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:25.470644  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:25.970962  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:26.471442  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:26.971311  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:27.470610  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:27.970961  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:28.470640  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:28.971300  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:29.470626  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:29.971278  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:30.471158  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:30.970980  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:31.470603  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:31.971449  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:32.471177  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:32.970617  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:33.471419  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:33.970722  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:34.471271  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:34.970652  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:35.470921  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:35.971492  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:36.470973  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:36.971256  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:37.471394  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:37.970703  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:38.470961  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:38.970907  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:39.471451  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:39.970850  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:40.471304  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:40.971524  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:41.470744  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:41.971222  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:42.471463  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:42.970604  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:43.470720  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:43.970989  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:44.470818  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:44.970672  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:45.470866  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:45.970683  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:46.471245  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:46.970914  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:47.471423  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:47.971442  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:48.470948  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:48.971501  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:49.471382  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:49.970705  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:50.471271  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:50.971251  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:51.471164  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:51.971336  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:52.471372  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:52.970578  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:53.471263  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:53.971000  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:54.471313  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:54.970838  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:55.470657  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:55.970901  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:56.470732  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:56.971609  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:57.470670  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:57.971054  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:58.470843  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:58.971017  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:59.471644  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:28:59.970666  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:29:00.471498  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:29:00.970805  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:29:01.471435  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:29:01.970733  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:29:02.470885  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:29:02.970839  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:29:03.470540  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:29:03.970872  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:29:04.470727  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:29:04.970673  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:29:05.471322  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:29:05.970626  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:29:06.470920  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:29:06.970887  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:29:07.471415  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:29:07.970944  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:29:08.470610  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:29:08.971309  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:29:09.470706  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:29:09.971450  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:29:10.471425  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:29:10.971283  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:29:11.470937  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:29:11.970687  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
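The wall of pgrep lines above is a fixed-interval wait: roughly every 500ms the process table is checked for a kube-apiserver whose command line mentions minikube, and once the wait budget is exhausted the code falls back to gathering diagnostics (as it does next). A minimal version of such a loop; the interval and timeout here are illustrative, not minikube's exact values:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServer polls pgrep until the apiserver process exists
// or the deadline passes, mirroring the loop in the log above.
func waitForAPIServer(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// pgrep exits 0 as soon as a matching process is found.
		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("kube-apiserver did not appear within %s", timeout)
}

func main() {
	if err := waitForAPIServer(time.Minute); err != nil {
		fmt.Println(err) // on timeout the caller collects logs instead
	}
}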
	I1006 14:29:12.471591  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:29:12.471676  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:29:12.498988  656123 cri.go:89] found id: ""
	I1006 14:29:12.499014  656123 logs.go:282] 0 containers: []
	W1006 14:29:12.499021  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:29:12.499026  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:29:12.499080  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:29:12.526057  656123 cri.go:89] found id: ""
	I1006 14:29:12.526074  656123 logs.go:282] 0 containers: []
	W1006 14:29:12.526080  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:29:12.526085  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:29:12.526164  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:29:12.553395  656123 cri.go:89] found id: ""
	I1006 14:29:12.553415  656123 logs.go:282] 0 containers: []
	W1006 14:29:12.553426  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:29:12.553433  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:29:12.553486  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:29:12.580815  656123 cri.go:89] found id: ""
	I1006 14:29:12.580836  656123 logs.go:282] 0 containers: []
	W1006 14:29:12.580846  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:29:12.580870  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:29:12.580931  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:29:12.607516  656123 cri.go:89] found id: ""
	I1006 14:29:12.607533  656123 logs.go:282] 0 containers: []
	W1006 14:29:12.607539  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:29:12.607544  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:29:12.607607  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:29:12.634248  656123 cri.go:89] found id: ""
	I1006 14:29:12.634265  656123 logs.go:282] 0 containers: []
	W1006 14:29:12.634272  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:29:12.634279  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:29:12.634335  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:29:12.660860  656123 cri.go:89] found id: ""
	I1006 14:29:12.660876  656123 logs.go:282] 0 containers: []
	W1006 14:29:12.660883  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:29:12.660893  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:29:12.660905  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:29:12.731400  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:29:12.731425  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:29:12.745150  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:29:12.745174  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:29:12.803068  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:29:12.795122    6708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:12.795709    6708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:12.797425    6708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:12.797887    6708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:12.799415    6708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:29:12.795122    6708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:12.795709    6708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:12.797425    6708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:12.797887    6708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:12.799415    6708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 14:29:12.803085  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:29:12.803098  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:29:12.870066  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:29:12.870091  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
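When the apiserver never appears, each retry interleaves a diagnostics pass: the kubelet and CRI-O journals, dmesg, `kubectl describe nodes`, and `crictl ps -a` — all of which fail or come back empty here because no kube-system containers exist yet. A sketch of collecting such a fixed command set, with the commands lifted from the log:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// The same sources the log gathers between polling attempts.
	sources := []struct{ name, cmd string }{
		{"kubelet", "sudo journalctl -u kubelet -n 400"},
		{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
		{"CRI-O", "sudo journalctl -u crio -n 400"},
		{"container status", "sudo crictl ps -a"},
	}
	for _, s := range sources {
		out, err := exec.Command("/bin/bash", "-c", s.cmd).CombinedOutput()
		fmt.Printf("==> %s (err=%v)\n%s\n", s.name, err, out)
	}
}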
	I1006 14:29:15.401709  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:29:15.412675  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:29:15.412725  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:29:15.438239  656123 cri.go:89] found id: ""
	I1006 14:29:15.438255  656123 logs.go:282] 0 containers: []
	W1006 14:29:15.438264  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:29:15.438270  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:29:15.438322  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:29:15.463684  656123 cri.go:89] found id: ""
	I1006 14:29:15.463701  656123 logs.go:282] 0 containers: []
	W1006 14:29:15.463709  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:29:15.463715  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:29:15.463769  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:29:15.488259  656123 cri.go:89] found id: ""
	I1006 14:29:15.488276  656123 logs.go:282] 0 containers: []
	W1006 14:29:15.488284  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:29:15.488289  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:29:15.488347  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:29:15.514676  656123 cri.go:89] found id: ""
	I1006 14:29:15.514692  656123 logs.go:282] 0 containers: []
	W1006 14:29:15.514699  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:29:15.514704  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:29:15.514762  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:29:15.540755  656123 cri.go:89] found id: ""
	I1006 14:29:15.540770  656123 logs.go:282] 0 containers: []
	W1006 14:29:15.540776  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:29:15.540781  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:29:15.540832  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:29:15.565570  656123 cri.go:89] found id: ""
	I1006 14:29:15.565588  656123 logs.go:282] 0 containers: []
	W1006 14:29:15.565598  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:29:15.565604  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:29:15.565651  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:29:15.591845  656123 cri.go:89] found id: ""
	I1006 14:29:15.591860  656123 logs.go:282] 0 containers: []
	W1006 14:29:15.591876  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:29:15.591885  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:29:15.591895  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:29:15.605051  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:29:15.605069  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:29:15.662500  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:29:15.655240    6822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:15.655743    6822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:15.657283    6822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:15.657783    6822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:15.659338    6822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:29:15.655240    6822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:15.655743    6822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:15.657283    6822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:15.657783    6822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:15.659338    6822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 14:29:15.662517  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:29:15.662531  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:29:15.727404  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:29:15.727424  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:29:15.756261  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:29:15.756279  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:29:18.330899  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:29:18.342312  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:29:18.342369  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:29:18.367886  656123 cri.go:89] found id: ""
	I1006 14:29:18.367902  656123 logs.go:282] 0 containers: []
	W1006 14:29:18.367912  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:29:18.367919  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:29:18.367967  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:29:18.394659  656123 cri.go:89] found id: ""
	I1006 14:29:18.394676  656123 logs.go:282] 0 containers: []
	W1006 14:29:18.394685  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:29:18.394691  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:29:18.394752  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:29:18.420739  656123 cri.go:89] found id: ""
	I1006 14:29:18.420762  656123 logs.go:282] 0 containers: []
	W1006 14:29:18.420773  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:29:18.420780  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:29:18.420844  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:29:18.446534  656123 cri.go:89] found id: ""
	I1006 14:29:18.446553  656123 logs.go:282] 0 containers: []
	W1006 14:29:18.446560  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:29:18.446565  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:29:18.446610  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:29:18.474847  656123 cri.go:89] found id: ""
	I1006 14:29:18.474867  656123 logs.go:282] 0 containers: []
	W1006 14:29:18.474876  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:29:18.474882  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:29:18.474940  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:29:18.500739  656123 cri.go:89] found id: ""
	I1006 14:29:18.500755  656123 logs.go:282] 0 containers: []
	W1006 14:29:18.500762  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:29:18.500767  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:29:18.500817  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:29:18.526704  656123 cri.go:89] found id: ""
	I1006 14:29:18.526720  656123 logs.go:282] 0 containers: []
	W1006 14:29:18.526726  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:29:18.526735  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:29:18.526749  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:29:18.594578  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:29:18.594601  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:29:18.608090  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:29:18.608110  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:29:18.665980  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:29:18.658366    6961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:18.658897    6961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:18.660516    6961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:18.660915    6961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:18.662586    6961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:29:18.658366    6961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:18.658897    6961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:18.660516    6961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:18.660915    6961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:18.662586    6961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 14:29:18.665999  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:29:18.666015  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:29:18.726769  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:29:18.726792  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:29:21.257561  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:29:21.269556  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:29:21.269611  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:29:21.295967  656123 cri.go:89] found id: ""
	I1006 14:29:21.295989  656123 logs.go:282] 0 containers: []
	W1006 14:29:21.296000  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:29:21.296007  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:29:21.296062  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:29:21.323201  656123 cri.go:89] found id: ""
	I1006 14:29:21.323232  656123 logs.go:282] 0 containers: []
	W1006 14:29:21.323240  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:29:21.323246  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:29:21.323297  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:29:21.352254  656123 cri.go:89] found id: ""
	I1006 14:29:21.352271  656123 logs.go:282] 0 containers: []
	W1006 14:29:21.352277  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:29:21.352282  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:29:21.352343  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:29:21.380457  656123 cri.go:89] found id: ""
	I1006 14:29:21.380477  656123 logs.go:282] 0 containers: []
	W1006 14:29:21.380486  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:29:21.380493  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:29:21.380559  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:29:21.408352  656123 cri.go:89] found id: ""
	I1006 14:29:21.408368  656123 logs.go:282] 0 containers: []
	W1006 14:29:21.408375  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:29:21.408379  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:29:21.408435  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:29:21.434925  656123 cri.go:89] found id: ""
	I1006 14:29:21.434941  656123 logs.go:282] 0 containers: []
	W1006 14:29:21.434948  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:29:21.434953  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:29:21.435001  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:29:21.462533  656123 cri.go:89] found id: ""
	I1006 14:29:21.462551  656123 logs.go:282] 0 containers: []
	W1006 14:29:21.462560  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:29:21.462570  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:29:21.462587  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:29:21.532658  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:29:21.532682  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:29:21.547259  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:29:21.547286  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:29:21.605779  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:29:21.598199    7083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:21.598802    7083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:21.600396    7083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:21.600847    7083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:21.602071    7083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:29:21.598199    7083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:21.598802    7083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:21.600396    7083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:21.600847    7083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:21.602071    7083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 14:29:21.605799  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:29:21.605816  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:29:21.670469  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:29:21.670493  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:29:24.203350  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:29:24.214528  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:29:24.214576  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:29:24.241149  656123 cri.go:89] found id: ""
	I1006 14:29:24.241173  656123 logs.go:282] 0 containers: []
	W1006 14:29:24.241182  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:29:24.241187  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:29:24.241259  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:29:24.267072  656123 cri.go:89] found id: ""
	I1006 14:29:24.267089  656123 logs.go:282] 0 containers: []
	W1006 14:29:24.267099  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:29:24.267104  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:29:24.267157  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:29:24.292610  656123 cri.go:89] found id: ""
	I1006 14:29:24.292629  656123 logs.go:282] 0 containers: []
	W1006 14:29:24.292639  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:29:24.292645  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:29:24.292694  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:29:24.318386  656123 cri.go:89] found id: ""
	I1006 14:29:24.318403  656123 logs.go:282] 0 containers: []
	W1006 14:29:24.318409  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:29:24.318414  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:29:24.318471  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:29:24.344804  656123 cri.go:89] found id: ""
	I1006 14:29:24.344827  656123 logs.go:282] 0 containers: []
	W1006 14:29:24.344837  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:29:24.344843  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:29:24.344893  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:29:24.372496  656123 cri.go:89] found id: ""
	I1006 14:29:24.372512  656123 logs.go:282] 0 containers: []
	W1006 14:29:24.372518  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:29:24.372523  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:29:24.372569  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:29:24.397473  656123 cri.go:89] found id: ""
	I1006 14:29:24.397489  656123 logs.go:282] 0 containers: []
	W1006 14:29:24.397495  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:29:24.397503  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:29:24.397514  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:29:24.460002  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:29:24.460024  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:29:24.492377  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:29:24.492394  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:29:24.558943  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:29:24.558960  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:29:24.572667  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:29:24.572685  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:29:24.631693  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:29:24.623841    7216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:24.624453    7216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:24.626057    7216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:24.626493    7216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:24.628013    7216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:29:24.623841    7216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:24.624453    7216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:24.626057    7216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:24.626493    7216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:24.628013    7216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
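The two probes at the top of each cycle above are the core of this wait loop: a process check (pgrep) and a container check (crictl), both executed over SSH inside the node. With the docker driver they can be reproduced by hand as a sketch; the node container name "minikube" below is an assumption, not taken from this log, so list containers with docker ps to find the actual profile name.

	# Hand-run version of the probe pair shown in the log (a sketch; the
	# container name "minikube" is a placeholder assumption):
	docker exec minikube sudo pgrep -xnf 'kube-apiserver.*minikube.*'     # process probe
	docker exec minikube sudo crictl ps -a --quiet --name=kube-apiserver  # container probe
	# Both returning nothing matches the found id: "" lines above -- the
	# apiserver container was never created, so nothing serves port 8441.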
	I1006 14:29:27.132387  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:29:27.143350  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:29:27.143429  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:29:27.169854  656123 cri.go:89] found id: ""
	I1006 14:29:27.169869  656123 logs.go:282] 0 containers: []
	W1006 14:29:27.169877  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:29:27.169882  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:29:27.169930  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:29:27.196448  656123 cri.go:89] found id: ""
	I1006 14:29:27.196464  656123 logs.go:282] 0 containers: []
	W1006 14:29:27.196471  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:29:27.196476  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:29:27.196522  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:29:27.223046  656123 cri.go:89] found id: ""
	I1006 14:29:27.223066  656123 logs.go:282] 0 containers: []
	W1006 14:29:27.223075  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:29:27.223081  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:29:27.223147  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:29:27.249726  656123 cri.go:89] found id: ""
	I1006 14:29:27.249744  656123 logs.go:282] 0 containers: []
	W1006 14:29:27.249751  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:29:27.249756  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:29:27.249810  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:29:27.277358  656123 cri.go:89] found id: ""
	I1006 14:29:27.277376  656123 logs.go:282] 0 containers: []
	W1006 14:29:27.277391  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:29:27.277398  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:29:27.277468  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:29:27.303432  656123 cri.go:89] found id: ""
	I1006 14:29:27.303452  656123 logs.go:282] 0 containers: []
	W1006 14:29:27.303461  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:29:27.303467  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:29:27.303524  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:29:27.330642  656123 cri.go:89] found id: ""
	I1006 14:29:27.330660  656123 logs.go:282] 0 containers: []
	W1006 14:29:27.330666  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:29:27.330677  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:29:27.330692  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:29:27.360553  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:29:27.360570  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:29:27.428526  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:29:27.428550  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:29:27.442696  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:29:27.442720  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:29:27.500958  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:29:27.493064    7333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:27.493671    7333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:27.495253    7333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:27.495769    7333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:27.497273    7333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:29:27.493064    7333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:27.493671    7333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:27.495253    7333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:27.495769    7333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:27.497273    7333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 14:29:27.500983  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:29:27.500995  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
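Every kubectl attempt in these blocks dies before any API call is made: "connection refused" on [::1]:8441 means no listener is bound to the apiserver port at all, as opposed to a TLS or HTTP error from a live but unhealthy server. A tool-free way to make the same distinction from inside the node is a bash /dev/tcp probe (a sketch that assumes only bash itself; it checks the IPv4 loopback, which is refused here for the same reason as [::1]):

	# Probe the apiserver port without kubectl or curl:
	if (exec 3<>/dev/tcp/127.0.0.1/8441) 2>/dev/null; then
	  echo "port 8441 is open"
	else
	  echo "port 8441 refused: nothing is bound to it"
	fi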
	[... the probe-and-gather cycle above repeats unchanged every ~3 seconds at 14:29:30, 14:29:33, 14:29:36, 14:29:39, 14:29:42, and 14:29:45: each pass finds no kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, or kindnet containers, and each "describe nodes" attempt fails with the same connection-refused errors against localhost:8441 ...]
	I1006 14:29:47.666091  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:29:47.677001  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:29:47.677061  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:29:47.703386  656123 cri.go:89] found id: ""
	I1006 14:29:47.703404  656123 logs.go:282] 0 containers: []
	W1006 14:29:47.703412  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:29:47.703423  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:29:47.703482  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:29:47.729961  656123 cri.go:89] found id: ""
	I1006 14:29:47.729978  656123 logs.go:282] 0 containers: []
	W1006 14:29:47.729985  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:29:47.729998  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:29:47.730046  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:29:47.757114  656123 cri.go:89] found id: ""
	I1006 14:29:47.757148  656123 logs.go:282] 0 containers: []
	W1006 14:29:47.757155  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:29:47.757160  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:29:47.757220  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:29:47.783979  656123 cri.go:89] found id: ""
	I1006 14:29:47.783997  656123 logs.go:282] 0 containers: []
	W1006 14:29:47.784004  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:29:47.784008  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:29:47.784054  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:29:47.809265  656123 cri.go:89] found id: ""
	I1006 14:29:47.809280  656123 logs.go:282] 0 containers: []
	W1006 14:29:47.809287  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:29:47.809292  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:29:47.809337  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:29:47.834447  656123 cri.go:89] found id: ""
	I1006 14:29:47.834463  656123 logs.go:282] 0 containers: []
	W1006 14:29:47.834470  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:29:47.834474  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:29:47.834518  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:29:47.860785  656123 cri.go:89] found id: ""
	I1006 14:29:47.860802  656123 logs.go:282] 0 containers: []
	W1006 14:29:47.860808  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:29:47.860817  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:29:47.860827  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:29:47.928576  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:29:47.928600  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:29:47.942643  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:29:47.942669  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:29:48.000352  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:29:47.992403    8197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:47.992971    8197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:47.994566    8197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:47.995054    8197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:47.996597    8197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:29:47.992403    8197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:47.992971    8197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:47.994566    8197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:47.995054    8197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:47.996597    8197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 14:29:48.000373  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:29:48.000391  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:29:48.065612  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:29:48.065640  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
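The loop above is minikube's control-plane probe while it waits for the API server: pgrep looks for a running kube-apiserver process, then crictl is asked for matching container IDs one component at a time, and every query comes back empty. A minimal way to run the same probe by hand, assuming shell access to the node (the "minikube ssh" entry point is illustrative, not something this log ran):

    minikube ssh                                      # open a shell inside the node under test
    sudo crictl ps -a --quiet --name=kube-apiserver   # prints one ID per matching container; empty here
    sudo crictl ps -a                                 # unfiltered listing, to see if anything runs at all

An empty --quiet listing for every control-plane name is exactly what produces the repeated "0 containers" / No container was found matching pairs above.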
	I1006 14:29:50.596504  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:29:50.607654  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:29:50.607709  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:29:50.634723  656123 cri.go:89] found id: ""
	I1006 14:29:50.634742  656123 logs.go:282] 0 containers: []
	W1006 14:29:50.634751  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:29:50.634758  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:29:50.634821  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:29:50.662103  656123 cri.go:89] found id: ""
	I1006 14:29:50.662122  656123 logs.go:282] 0 containers: []
	W1006 14:29:50.662152  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:29:50.662160  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:29:50.662232  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:29:50.688627  656123 cri.go:89] found id: ""
	I1006 14:29:50.688646  656123 logs.go:282] 0 containers: []
	W1006 14:29:50.688653  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:29:50.688658  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:29:50.688719  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:29:50.715511  656123 cri.go:89] found id: ""
	I1006 14:29:50.715530  656123 logs.go:282] 0 containers: []
	W1006 14:29:50.715540  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:29:50.715544  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:29:50.715608  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:29:50.742597  656123 cri.go:89] found id: ""
	I1006 14:29:50.742612  656123 logs.go:282] 0 containers: []
	W1006 14:29:50.742619  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:29:50.742624  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:29:50.742671  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:29:50.769656  656123 cri.go:89] found id: ""
	I1006 14:29:50.769672  656123 logs.go:282] 0 containers: []
	W1006 14:29:50.769679  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:29:50.769684  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:29:50.769740  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:29:50.797585  656123 cri.go:89] found id: ""
	I1006 14:29:50.797603  656123 logs.go:282] 0 containers: []
	W1006 14:29:50.797611  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:29:50.797620  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:29:50.797631  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:29:50.811635  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:29:50.811664  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:29:50.870641  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:29:50.863296    8314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:50.863835    8314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:50.865405    8314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:50.865832    8314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:50.866946    8314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:29:50.863296    8314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:50.863835    8314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:50.865405    8314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:50.865832    8314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:50.866946    8314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 14:29:50.870652  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:29:50.870665  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:29:50.933617  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:29:50.933644  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:29:50.964985  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:29:50.965003  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
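The describe-nodes gather fails identically on every retry: kubectl reads the server address from /var/lib/minikube/kubeconfig, which for this profile points at localhost:8441, and the TCP dial is refused because nothing is listening yet. A quick way to confirm that reading from inside the node (the ss and curl invocations below are illustrative, not commands this log ran):

    sudo ss -ltnp | grep 8441 || echo "nothing listening on 8441"
    curl -ksS https://localhost:8441/healthz          # connection refused until kube-apiserver binds the port

Once kube-apiserver is up, the same curl would return ok and the retry loop could make progress.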
	I1006 14:29:53.535109  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:29:53.545986  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:29:53.546039  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:29:53.571300  656123 cri.go:89] found id: ""
	I1006 14:29:53.571315  656123 logs.go:282] 0 containers: []
	W1006 14:29:53.571322  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:29:53.571328  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:29:53.571373  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:29:53.597111  656123 cri.go:89] found id: ""
	I1006 14:29:53.597126  656123 logs.go:282] 0 containers: []
	W1006 14:29:53.597132  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:29:53.597137  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:29:53.597188  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:29:53.621477  656123 cri.go:89] found id: ""
	I1006 14:29:53.621493  656123 logs.go:282] 0 containers: []
	W1006 14:29:53.621500  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:29:53.621504  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:29:53.621550  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:29:53.647877  656123 cri.go:89] found id: ""
	I1006 14:29:53.647891  656123 logs.go:282] 0 containers: []
	W1006 14:29:53.647898  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:29:53.647902  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:29:53.647947  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:29:53.673269  656123 cri.go:89] found id: ""
	I1006 14:29:53.673284  656123 logs.go:282] 0 containers: []
	W1006 14:29:53.673291  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:29:53.673296  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:29:53.673356  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:29:53.698368  656123 cri.go:89] found id: ""
	I1006 14:29:53.698384  656123 logs.go:282] 0 containers: []
	W1006 14:29:53.698390  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:29:53.698395  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:29:53.698446  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:29:53.724452  656123 cri.go:89] found id: ""
	I1006 14:29:53.724471  656123 logs.go:282] 0 containers: []
	W1006 14:29:53.724481  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:29:53.724491  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:29:53.724507  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:29:53.790937  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:29:53.790959  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:29:53.804913  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:29:53.804929  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:29:53.862094  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:29:53.854344    8433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:53.854872    8433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:53.856476    8433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:53.856953    8433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:53.858577    8433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:29:53.854344    8433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:53.854872    8433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:53.856476    8433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:53.856953    8433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:53.858577    8433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 14:29:53.862111  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:29:53.862124  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:29:53.921847  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:29:53.921867  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
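With no containers to inspect, the only useful evidence lives in the host units, so each retry also tails kubelet and CRI-O through journalctl. The equivalent manual commands, with --no-pager added since the logged ssh_runner invocations only avoid the pager by running non-interactively:

    sudo journalctl -u kubelet -n 400 --no-pager      # last 400 kubelet lines
    sudo journalctl -u crio -n 400 --no-pager         # last 400 CRI-O lines

In this state the kubelet log is usually where the root cause surfaces, for example a static-pod manifest or image-pull error that keeps kube-apiserver from ever being created.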
	I1006 14:29:56.452775  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:29:56.464702  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:29:56.464760  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:29:56.491587  656123 cri.go:89] found id: ""
	I1006 14:29:56.491603  656123 logs.go:282] 0 containers: []
	W1006 14:29:56.491609  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:29:56.491614  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:29:56.491662  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:29:56.517138  656123 cri.go:89] found id: ""
	I1006 14:29:56.517157  656123 logs.go:282] 0 containers: []
	W1006 14:29:56.517166  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:29:56.517170  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:29:56.517243  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:29:56.542713  656123 cri.go:89] found id: ""
	I1006 14:29:56.542728  656123 logs.go:282] 0 containers: []
	W1006 14:29:56.542735  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:29:56.542740  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:29:56.542787  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:29:56.568528  656123 cri.go:89] found id: ""
	I1006 14:29:56.568545  656123 logs.go:282] 0 containers: []
	W1006 14:29:56.568554  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:29:56.568561  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:29:56.568616  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:29:56.593881  656123 cri.go:89] found id: ""
	I1006 14:29:56.593897  656123 logs.go:282] 0 containers: []
	W1006 14:29:56.593904  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:29:56.593909  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:29:56.593957  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:29:56.618843  656123 cri.go:89] found id: ""
	I1006 14:29:56.618862  656123 logs.go:282] 0 containers: []
	W1006 14:29:56.618869  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:29:56.618874  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:29:56.618931  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:29:56.644219  656123 cri.go:89] found id: ""
	I1006 14:29:56.644239  656123 logs.go:282] 0 containers: []
	W1006 14:29:56.644249  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:29:56.644258  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:29:56.644270  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:29:56.701345  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:29:56.693737    8555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:56.694299    8555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:56.695864    8555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:56.696432    8555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:56.697961    8555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:29:56.693737    8555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:56.694299    8555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:56.695864    8555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:56.696432    8555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:56.697961    8555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 14:29:56.701372  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:29:56.701384  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:29:56.762071  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:29:56.762096  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:29:56.791634  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:29:56.791656  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:29:56.857469  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:29:56.857492  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
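The dmesg gather filters the kernel ring buffer down to actionable levels. Flag by flag: -P disables the pager, -H renders human-readable timestamps, -L=never forces plain uncolored text, --level restricts output to warn and above, and the tail bounds the size. Standalone it reads:

    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400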
	I1006 14:29:59.371748  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:29:59.383943  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:29:59.384004  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:29:59.411674  656123 cri.go:89] found id: ""
	I1006 14:29:59.411695  656123 logs.go:282] 0 containers: []
	W1006 14:29:59.411703  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:29:59.411712  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:29:59.411829  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:29:59.438177  656123 cri.go:89] found id: ""
	I1006 14:29:59.438193  656123 logs.go:282] 0 containers: []
	W1006 14:29:59.438200  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:29:59.438217  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:29:59.438276  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:29:59.467581  656123 cri.go:89] found id: ""
	I1006 14:29:59.467601  656123 logs.go:282] 0 containers: []
	W1006 14:29:59.467611  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:29:59.467619  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:29:59.467682  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:29:59.496610  656123 cri.go:89] found id: ""
	I1006 14:29:59.496626  656123 logs.go:282] 0 containers: []
	W1006 14:29:59.496633  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:29:59.496638  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:29:59.496684  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:29:59.523799  656123 cri.go:89] found id: ""
	I1006 14:29:59.523815  656123 logs.go:282] 0 containers: []
	W1006 14:29:59.523822  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:29:59.523827  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:29:59.523889  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:29:59.550529  656123 cri.go:89] found id: ""
	I1006 14:29:59.550546  656123 logs.go:282] 0 containers: []
	W1006 14:29:59.550553  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:29:59.550558  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:29:59.550606  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:29:59.577487  656123 cri.go:89] found id: ""
	I1006 14:29:59.577503  656123 logs.go:282] 0 containers: []
	W1006 14:29:59.577509  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:29:59.577518  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:29:59.577529  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:29:59.607238  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:29:59.607260  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:29:59.676960  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:29:59.676986  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:29:59.690846  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:29:59.690869  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:29:59.749311  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:29:59.741475    8704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:59.742053    8704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:59.743670    8704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:59.744122    8704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:59.745515    8704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:29:59.741475    8704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:59.742053    8704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:59.743670    8704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:59.744122    8704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:29:59.745515    8704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 14:29:59.749329  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:29:59.749339  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
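The container-status gather is deliberately defensive: the backquoted which crictl || echo crictl substitutes crictl's absolute path when the binary is found and otherwise passes the bare name through, and if the crictl listing still fails the whole command falls back to docker ps. The same idea spelled out without backquotes (a sketch, not minikube's code):

    if command -v crictl >/dev/null 2>&1; then
        sudo crictl ps -a        # CRI runtimes; CRI-O in this job
    else
        sudo docker ps -a        # fallback for docker-runtime nodes
    fi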
	I1006 14:30:02.310264  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:30:02.321519  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:30:02.321570  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:30:02.347821  656123 cri.go:89] found id: ""
	I1006 14:30:02.347842  656123 logs.go:282] 0 containers: []
	W1006 14:30:02.347852  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:30:02.347860  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:30:02.347920  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:30:02.373381  656123 cri.go:89] found id: ""
	I1006 14:30:02.373404  656123 logs.go:282] 0 containers: []
	W1006 14:30:02.373412  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:30:02.373418  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:30:02.373462  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:30:02.401169  656123 cri.go:89] found id: ""
	I1006 14:30:02.401189  656123 logs.go:282] 0 containers: []
	W1006 14:30:02.401199  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:30:02.401215  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:30:02.401271  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:30:02.427774  656123 cri.go:89] found id: ""
	I1006 14:30:02.427790  656123 logs.go:282] 0 containers: []
	W1006 14:30:02.427799  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:30:02.427806  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:30:02.427858  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:30:02.453624  656123 cri.go:89] found id: ""
	I1006 14:30:02.453642  656123 logs.go:282] 0 containers: []
	W1006 14:30:02.453652  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:30:02.453659  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:30:02.453725  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:30:02.480503  656123 cri.go:89] found id: ""
	I1006 14:30:02.480520  656123 logs.go:282] 0 containers: []
	W1006 14:30:02.480526  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:30:02.480531  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:30:02.480581  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:30:02.506624  656123 cri.go:89] found id: ""
	I1006 14:30:02.506643  656123 logs.go:282] 0 containers: []
	W1006 14:30:02.506652  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:30:02.506662  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:30:02.506675  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:30:02.575030  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:30:02.575055  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:30:02.589240  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:30:02.589266  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:30:02.647840  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:30:02.640193    8804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:02.640759    8804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:02.642327    8804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:02.642757    8804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:02.644424    8804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:30:02.640193    8804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:02.640759    8804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:02.642327    8804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:02.642757    8804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:02.644424    8804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 14:30:02.647855  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:30:02.647866  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:30:02.710907  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:30:02.710932  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
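Each retry cycle opens with the cheapest check, pgrep -xnf kube-apiserver.*minikube.*: -f matches the pattern against the full command line instead of just the process name, -x requires the pattern to match that whole line, and -n keeps only the newest match. A non-zero exit, meaning no such process, is what sends the code down the container-listing path seen above. An illustrative polling wrapper, not part of this log:

    until sudo pgrep -xnf 'kube-apiserver.*minikube.*'; do sleep 3; done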
	I1006 14:30:05.243556  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:30:05.254230  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:30:05.254287  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:30:05.279490  656123 cri.go:89] found id: ""
	I1006 14:30:05.279506  656123 logs.go:282] 0 containers: []
	W1006 14:30:05.279514  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:30:05.279520  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:30:05.279572  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:30:05.305513  656123 cri.go:89] found id: ""
	I1006 14:30:05.305533  656123 logs.go:282] 0 containers: []
	W1006 14:30:05.305539  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:30:05.305544  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:30:05.305591  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:30:05.331962  656123 cri.go:89] found id: ""
	I1006 14:30:05.331981  656123 logs.go:282] 0 containers: []
	W1006 14:30:05.331990  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:30:05.331996  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:30:05.332058  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:30:05.357789  656123 cri.go:89] found id: ""
	I1006 14:30:05.357807  656123 logs.go:282] 0 containers: []
	W1006 14:30:05.357815  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:30:05.357820  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:30:05.357866  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:30:05.383637  656123 cri.go:89] found id: ""
	I1006 14:30:05.383658  656123 logs.go:282] 0 containers: []
	W1006 14:30:05.383664  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:30:05.383669  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:30:05.383715  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:30:05.408314  656123 cri.go:89] found id: ""
	I1006 14:30:05.408332  656123 logs.go:282] 0 containers: []
	W1006 14:30:05.408341  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:30:05.408348  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:30:05.408418  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:30:05.433843  656123 cri.go:89] found id: ""
	I1006 14:30:05.433861  656123 logs.go:282] 0 containers: []
	W1006 14:30:05.433867  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:30:05.433876  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:30:05.433888  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:30:05.494147  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:30:05.494176  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:30:05.523997  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:30:05.524016  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:30:05.591019  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:30:05.591039  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:30:05.604531  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:30:05.604546  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:30:05.660873  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:30:05.653677    8938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:05.654169    8938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:05.655684    8938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:05.656053    8938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:05.657599    8938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:30:05.653677    8938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:05.654169    8938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:05.655684    8938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:05.656053    8938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:05.657599    8938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 14:30:08.162635  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:30:08.173492  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:30:08.173538  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:30:08.199879  656123 cri.go:89] found id: ""
	I1006 14:30:08.199896  656123 logs.go:282] 0 containers: []
	W1006 14:30:08.199902  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:30:08.199907  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:30:08.199954  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:30:08.225501  656123 cri.go:89] found id: ""
	I1006 14:30:08.225520  656123 logs.go:282] 0 containers: []
	W1006 14:30:08.225531  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:30:08.225537  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:30:08.225598  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:30:08.251711  656123 cri.go:89] found id: ""
	I1006 14:30:08.251730  656123 logs.go:282] 0 containers: []
	W1006 14:30:08.251737  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:30:08.251742  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:30:08.251790  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:30:08.277559  656123 cri.go:89] found id: ""
	I1006 14:30:08.277575  656123 logs.go:282] 0 containers: []
	W1006 14:30:08.277584  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:30:08.277594  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:30:08.277656  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:30:08.303749  656123 cri.go:89] found id: ""
	I1006 14:30:08.303767  656123 logs.go:282] 0 containers: []
	W1006 14:30:08.303776  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:30:08.303781  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:30:08.303830  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:30:08.329034  656123 cri.go:89] found id: ""
	I1006 14:30:08.329053  656123 logs.go:282] 0 containers: []
	W1006 14:30:08.329059  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:30:08.329064  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:30:08.329111  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:30:08.354393  656123 cri.go:89] found id: ""
	I1006 14:30:08.354409  656123 logs.go:282] 0 containers: []
	W1006 14:30:08.354416  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:30:08.354423  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:30:08.354434  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:30:08.416780  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:30:08.416799  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:30:08.444904  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:30:08.444925  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:30:08.518089  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:30:08.518111  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:30:08.531108  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:30:08.531124  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:30:08.586529  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:30:08.578762    9065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:08.579607    9065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:08.581199    9065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:08.581663    9065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:08.583179    9065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:30:08.578762    9065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:08.579607    9065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:08.581199    9065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:08.581663    9065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:08.583179    9065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 14:30:11.087318  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:30:11.098631  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:30:11.098701  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:30:11.125423  656123 cri.go:89] found id: ""
	I1006 14:30:11.125441  656123 logs.go:282] 0 containers: []
	W1006 14:30:11.125450  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:30:11.125456  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:30:11.125520  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:30:11.154785  656123 cri.go:89] found id: ""
	I1006 14:30:11.154803  656123 logs.go:282] 0 containers: []
	W1006 14:30:11.154810  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:30:11.154815  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:30:11.154868  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:30:11.180879  656123 cri.go:89] found id: ""
	I1006 14:30:11.180899  656123 logs.go:282] 0 containers: []
	W1006 14:30:11.180908  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:30:11.180915  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:30:11.180979  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:30:11.207281  656123 cri.go:89] found id: ""
	I1006 14:30:11.207308  656123 logs.go:282] 0 containers: []
	W1006 14:30:11.207318  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:30:11.207326  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:30:11.207391  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:30:11.234275  656123 cri.go:89] found id: ""
	I1006 14:30:11.234293  656123 logs.go:282] 0 containers: []
	W1006 14:30:11.234302  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:30:11.234308  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:30:11.234379  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:30:11.261486  656123 cri.go:89] found id: ""
	I1006 14:30:11.261502  656123 logs.go:282] 0 containers: []
	W1006 14:30:11.261508  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:30:11.261514  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:30:11.261561  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:30:11.287155  656123 cri.go:89] found id: ""
	I1006 14:30:11.287173  656123 logs.go:282] 0 containers: []
	W1006 14:30:11.287180  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:30:11.287189  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:30:11.287223  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:30:11.358359  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:30:11.358383  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:30:11.372359  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:30:11.372385  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:30:11.430998  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:30:11.423269    9166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:11.423805    9166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:11.425394    9166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:11.425911    9166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:11.427479    9166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:30:11.423269    9166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:11.423805    9166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:11.425394    9166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:11.425911    9166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:11.427479    9166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 14:30:11.431012  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:30:11.431023  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:30:11.498514  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:30:11.498538  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
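Taken together, the retries are one fixed scan repeated on roughly a three-second interval: process check, per-component container listing, then the log gathers in varying order. A compact shell rendering of one pass, with the component list and spacing read off the timestamps above (minikube itself implements this in Go, per the cri.go and logs.go call sites):

    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
        sudo crictl ps -a --quiet --name="$name"
    done
    sleep 3   # approximate retry spacing visible in the timestamps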
	I1006 14:30:14.030847  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:30:14.041715  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:30:14.041763  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:30:14.067907  656123 cri.go:89] found id: ""
	I1006 14:30:14.067927  656123 logs.go:282] 0 containers: []
	W1006 14:30:14.067938  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:30:14.067944  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:30:14.067992  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:30:14.093781  656123 cri.go:89] found id: ""
	I1006 14:30:14.093800  656123 logs.go:282] 0 containers: []
	W1006 14:30:14.093810  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:30:14.093817  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:30:14.093873  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:30:14.120737  656123 cri.go:89] found id: ""
	I1006 14:30:14.120752  656123 logs.go:282] 0 containers: []
	W1006 14:30:14.120759  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:30:14.120765  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:30:14.120825  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:30:14.148551  656123 cri.go:89] found id: ""
	I1006 14:30:14.148567  656123 logs.go:282] 0 containers: []
	W1006 14:30:14.148575  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:30:14.148580  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:30:14.148632  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:30:14.174943  656123 cri.go:89] found id: ""
	I1006 14:30:14.174960  656123 logs.go:282] 0 containers: []
	W1006 14:30:14.174965  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:30:14.174970  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:30:14.175032  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:30:14.201148  656123 cri.go:89] found id: ""
	I1006 14:30:14.201163  656123 logs.go:282] 0 containers: []
	W1006 14:30:14.201172  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:30:14.201178  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:30:14.201245  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:30:14.228046  656123 cri.go:89] found id: ""
	I1006 14:30:14.228062  656123 logs.go:282] 0 containers: []
	W1006 14:30:14.228068  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:30:14.228077  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:30:14.228087  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:30:14.300889  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:30:14.300914  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:30:14.314304  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:30:14.314326  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:30:14.370818  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:30:14.363282    9300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:14.363836    9300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:14.365383    9300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:14.365793    9300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:14.367329    9300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:30:14.363282    9300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:14.363836    9300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:14.365383    9300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:14.365793    9300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:14.367329    9300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 14:30:14.370827  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:30:14.370838  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:30:14.431681  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:30:14.431704  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
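	Each cycle begins by probing for control-plane containers: pgrep for a live kube-apiserver process, then "crictl ps -a --quiet --name=<component>" per component. An empty result is what produces the paired 'found id: ""' / "0 containers" lines above. A hedged sketch of that detection loop, using plain os/exec rather than minikube's cri package:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// containerIDs mirrors `sudo crictl ps -a --quiet --name=<name>`; an
	// empty slice corresponds to the "No container was found" warnings.
	func containerIDs(name string) []string {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil
		}
		return strings.Fields(string(out))
	}

	func main() {
		for _, name := range []string{"kube-apiserver", "etcd", "coredns",
			"kube-scheduler", "kube-proxy", "kube-controller-manager", "kindnet"} {
			if ids := containerIDs(name); len(ids) == 0 {
				fmt.Printf("no container was found matching %q\n", name)
			} else {
				fmt.Printf("%s: %v\n", name, ids)
			}
		}
	}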
	[... the probe/log-gathering cycle above repeats unchanged at 14:30:17, 14:30:20, 14:30:23, and 14:30:26, each ending in the same "connection refused" failure against localhost:8441; only the timestamps and kubectl PIDs differ ...]
	I1006 14:30:28.721028  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:30:28.732295  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:30:28.732361  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:30:28.759561  656123 cri.go:89] found id: ""
	I1006 14:30:28.759583  656123 logs.go:282] 0 containers: []
	W1006 14:30:28.759592  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:30:28.759598  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:30:28.759651  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:30:28.787553  656123 cri.go:89] found id: ""
	I1006 14:30:28.787573  656123 logs.go:282] 0 containers: []
	W1006 14:30:28.787584  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:30:28.787598  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:30:28.787653  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:30:28.813499  656123 cri.go:89] found id: ""
	I1006 14:30:28.813520  656123 logs.go:282] 0 containers: []
	W1006 14:30:28.813529  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:30:28.813535  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:30:28.813591  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:30:28.840441  656123 cri.go:89] found id: ""
	I1006 14:30:28.840462  656123 logs.go:282] 0 containers: []
	W1006 14:30:28.840468  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:30:28.840474  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:30:28.840523  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:30:28.867632  656123 cri.go:89] found id: ""
	I1006 14:30:28.867647  656123 logs.go:282] 0 containers: []
	W1006 14:30:28.867654  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:30:28.867659  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:30:28.867709  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:30:28.895005  656123 cri.go:89] found id: ""
	I1006 14:30:28.895023  656123 logs.go:282] 0 containers: []
	W1006 14:30:28.895029  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:30:28.895034  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:30:28.895082  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:30:28.920965  656123 cri.go:89] found id: ""
	I1006 14:30:28.920983  656123 logs.go:282] 0 containers: []
	W1006 14:30:28.920993  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:30:28.921003  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:30:28.921017  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:30:28.981278  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:30:28.981302  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:30:29.010983  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:30:29.011000  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:30:29.078541  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:30:29.078565  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:30:29.092586  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:30:29.092613  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:30:29.151129  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:30:29.143937    9927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:29.144542    9927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:29.146112    9927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:29.146650    9927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:29.147708    9927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:30:29.143937    9927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:29.144542    9927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:29.146112    9927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:29.146650    9927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:29.147708    9927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
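	In the 14:30:28 cycle the gather order flips: CRI-O and container status come before kubelet and dmesg. One plausible explanation, offered as an assumption rather than a reading of minikube's source, is that the gather targets are ranged over from a Go map, whose iteration order is randomized by the runtime:

	package main

	import "fmt"

	func main() {
		// Hypothetical layout: if the targets live in a map keyed by name,
		// successive range loops can visit them in different orders.
		targets := map[string]string{
			"kubelet":          "sudo journalctl -u kubelet -n 400",
			"dmesg":            "sudo dmesg ... | tail -n 400",
			"CRI-O":            "sudo journalctl -u crio -n 400",
			"container status": "sudo crictl ps -a || sudo docker ps -a",
		}
		for cycle := 0; cycle < 3; cycle++ {
			for name := range targets { // order varies from cycle to cycle
				fmt.Print(name, "  ")
			}
			fmt.Println()
		}
	}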
	[... two further cycles identical to the earlier ones follow at 14:30:31 and 14:30:34, again finding no control-plane containers and failing describe-nodes with the same connection-refused errors ...]
	I1006 14:30:37.516053  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:30:37.526705  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:30:37.526751  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:30:37.551472  656123 cri.go:89] found id: ""
	I1006 14:30:37.551490  656123 logs.go:282] 0 containers: []
	W1006 14:30:37.551500  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:30:37.551507  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:30:37.551561  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:30:37.576603  656123 cri.go:89] found id: ""
	I1006 14:30:37.576619  656123 logs.go:282] 0 containers: []
	W1006 14:30:37.576626  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:30:37.576630  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:30:37.576674  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:30:37.602217  656123 cri.go:89] found id: ""
	I1006 14:30:37.602241  656123 logs.go:282] 0 containers: []
	W1006 14:30:37.602250  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:30:37.602254  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:30:37.602300  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:30:37.627547  656123 cri.go:89] found id: ""
	I1006 14:30:37.627561  656123 logs.go:282] 0 containers: []
	W1006 14:30:37.627567  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:30:37.627572  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:30:37.627614  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:30:37.652434  656123 cri.go:89] found id: ""
	I1006 14:30:37.652451  656123 logs.go:282] 0 containers: []
	W1006 14:30:37.652460  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:30:37.652467  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:30:37.652519  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:30:37.677543  656123 cri.go:89] found id: ""
	I1006 14:30:37.677558  656123 logs.go:282] 0 containers: []
	W1006 14:30:37.677564  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:30:37.677569  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:30:37.677611  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:30:37.701695  656123 cri.go:89] found id: ""
	I1006 14:30:37.701711  656123 logs.go:282] 0 containers: []
	W1006 14:30:37.701718  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:30:37.701727  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:30:37.701737  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:30:37.730832  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:30:37.730852  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:30:37.799686  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:30:37.799708  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:30:37.813081  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:30:37.813106  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:30:37.869274  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:30:37.861812   10287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:37.862406   10287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:37.863958   10287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:37.864398   10287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:37.865877   10287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:30:37.861812   10287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:37.862406   10287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:37.863958   10287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:37.864398   10287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:37.865877   10287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 14:30:37.869285  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:30:37.869297  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
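	[editor's note] The cycle above is minikube's apiserver wait loop: it probes for a kube-apiserver process, lists CRI containers for each control-plane component, and gathers kubelet/dmesg/CRI-O logs while "describe nodes" keeps failing against localhost:8441. The same probe can be run by hand on the node. A minimal sketch, assuming SSH access into the minikube container; the pgrep/crictl/journalctl invocations are copied from the log itself, while the curl health probe is an added illustration, not a command the log runs:

	  # Is a kube-apiserver process running at all? (verbatim from the log)
	  sudo pgrep -xnf 'kube-apiserver.*minikube.*'

	  # Does CRI-O know of any apiserver container, running or exited? (verbatim from the log)
	  sudo crictl ps -a --quiet --name=kube-apiserver

	  # Probe the port the kubeconfig points at (hypothetical check, assumes curl is on the node)
	  curl -ks https://localhost:8441/healthz

	  # If both are empty and the probe is refused, ask kubelet why the static pod never started
	  sudo journalctl -u kubelet -n 400 | grep -iE 'apiserver|static'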
	I1006 14:30:40.432488  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:30:40.443779  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:30:40.443830  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:30:40.471502  656123 cri.go:89] found id: ""
	I1006 14:30:40.471520  656123 logs.go:282] 0 containers: []
	W1006 14:30:40.471528  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:30:40.471533  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:30:40.471591  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:30:40.498418  656123 cri.go:89] found id: ""
	I1006 14:30:40.498435  656123 logs.go:282] 0 containers: []
	W1006 14:30:40.498442  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:30:40.498447  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:30:40.498495  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:30:40.525987  656123 cri.go:89] found id: ""
	I1006 14:30:40.526003  656123 logs.go:282] 0 containers: []
	W1006 14:30:40.526009  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:30:40.526015  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:30:40.526073  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:30:40.554161  656123 cri.go:89] found id: ""
	I1006 14:30:40.554180  656123 logs.go:282] 0 containers: []
	W1006 14:30:40.554190  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:30:40.554197  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:30:40.554262  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:30:40.581168  656123 cri.go:89] found id: ""
	I1006 14:30:40.581186  656123 logs.go:282] 0 containers: []
	W1006 14:30:40.581193  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:30:40.581198  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:30:40.581272  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:30:40.608862  656123 cri.go:89] found id: ""
	I1006 14:30:40.608879  656123 logs.go:282] 0 containers: []
	W1006 14:30:40.608890  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:30:40.608899  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:30:40.608951  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:30:40.636053  656123 cri.go:89] found id: ""
	I1006 14:30:40.636069  656123 logs.go:282] 0 containers: []
	W1006 14:30:40.636076  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:30:40.636084  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:30:40.636096  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:30:40.649832  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:30:40.649854  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:30:40.708143  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:30:40.700302   10406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:40.700800   10406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:40.702328   10406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:40.702794   10406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:40.704437   10406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:30:40.700302   10406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:40.700800   10406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:40.702328   10406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:40.702794   10406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:40.704437   10406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 14:30:40.708157  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:30:40.708173  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:30:40.767571  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:30:40.767598  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:30:40.798425  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:30:40.798447  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:30:43.369172  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:30:43.380275  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:30:43.380336  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:30:43.407137  656123 cri.go:89] found id: ""
	I1006 14:30:43.407166  656123 logs.go:282] 0 containers: []
	W1006 14:30:43.407172  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:30:43.407178  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:30:43.407255  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:30:43.434264  656123 cri.go:89] found id: ""
	I1006 14:30:43.434280  656123 logs.go:282] 0 containers: []
	W1006 14:30:43.434286  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:30:43.434291  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:30:43.434344  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:30:43.460492  656123 cri.go:89] found id: ""
	I1006 14:30:43.460511  656123 logs.go:282] 0 containers: []
	W1006 14:30:43.460521  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:30:43.460527  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:30:43.460579  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:30:43.486096  656123 cri.go:89] found id: ""
	I1006 14:30:43.486112  656123 logs.go:282] 0 containers: []
	W1006 14:30:43.486118  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:30:43.486123  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:30:43.486180  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:30:43.512166  656123 cri.go:89] found id: ""
	I1006 14:30:43.512182  656123 logs.go:282] 0 containers: []
	W1006 14:30:43.512189  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:30:43.512200  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:30:43.512274  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:30:43.540182  656123 cri.go:89] found id: ""
	I1006 14:30:43.540198  656123 logs.go:282] 0 containers: []
	W1006 14:30:43.540225  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:30:43.540231  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:30:43.540281  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:30:43.566257  656123 cri.go:89] found id: ""
	I1006 14:30:43.566276  656123 logs.go:282] 0 containers: []
	W1006 14:30:43.566283  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:30:43.566291  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:30:43.566301  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:30:43.633282  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:30:43.633308  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:30:43.646525  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:30:43.646547  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:30:43.703245  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:30:43.695412   10527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:43.695958   10527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:43.697564   10527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:43.698089   10527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:43.699634   10527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:30:43.695412   10527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:43.695958   10527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:43.697564   10527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:43.698089   10527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:43.699634   10527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 14:30:43.703258  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:30:43.703271  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:30:43.763009  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:30:43.763030  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:30:46.294610  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:30:46.306608  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:30:46.306657  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:30:46.333990  656123 cri.go:89] found id: ""
	I1006 14:30:46.334010  656123 logs.go:282] 0 containers: []
	W1006 14:30:46.334017  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:30:46.334023  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:30:46.334071  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:30:46.360169  656123 cri.go:89] found id: ""
	I1006 14:30:46.360186  656123 logs.go:282] 0 containers: []
	W1006 14:30:46.360193  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:30:46.360197  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:30:46.360274  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:30:46.386526  656123 cri.go:89] found id: ""
	I1006 14:30:46.386543  656123 logs.go:282] 0 containers: []
	W1006 14:30:46.386552  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:30:46.386559  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:30:46.386618  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:30:46.412732  656123 cri.go:89] found id: ""
	I1006 14:30:46.412755  656123 logs.go:282] 0 containers: []
	W1006 14:30:46.412761  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:30:46.412768  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:30:46.412819  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:30:46.437943  656123 cri.go:89] found id: ""
	I1006 14:30:46.437961  656123 logs.go:282] 0 containers: []
	W1006 14:30:46.437969  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:30:46.437975  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:30:46.438022  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:30:46.462227  656123 cri.go:89] found id: ""
	I1006 14:30:46.462245  656123 logs.go:282] 0 containers: []
	W1006 14:30:46.462254  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:30:46.462259  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:30:46.462308  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:30:46.486426  656123 cri.go:89] found id: ""
	I1006 14:30:46.486446  656123 logs.go:282] 0 containers: []
	W1006 14:30:46.486455  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:30:46.486465  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:30:46.486478  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:30:46.555804  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:30:46.555824  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:30:46.568953  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:30:46.568977  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:30:46.625518  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:30:46.616895   10651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:46.618433   10651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:46.618998   10651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:46.620647   10651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:46.621154   10651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:30:46.616895   10651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:46.618433   10651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:46.618998   10651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:46.620647   10651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:46.621154   10651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 14:30:46.625532  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:30:46.625542  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:30:46.689026  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:30:46.689045  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:30:49.220452  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:30:49.231376  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:30:49.231437  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:30:49.257464  656123 cri.go:89] found id: ""
	I1006 14:30:49.257484  656123 logs.go:282] 0 containers: []
	W1006 14:30:49.257492  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:30:49.257499  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:30:49.257549  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:30:49.282291  656123 cri.go:89] found id: ""
	I1006 14:30:49.282305  656123 logs.go:282] 0 containers: []
	W1006 14:30:49.282315  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:30:49.282322  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:30:49.282374  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:30:49.307787  656123 cri.go:89] found id: ""
	I1006 14:30:49.307806  656123 logs.go:282] 0 containers: []
	W1006 14:30:49.307815  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:30:49.307821  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:30:49.307872  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:30:49.333154  656123 cri.go:89] found id: ""
	I1006 14:30:49.333172  656123 logs.go:282] 0 containers: []
	W1006 14:30:49.333179  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:30:49.333185  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:30:49.333252  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:30:49.359161  656123 cri.go:89] found id: ""
	I1006 14:30:49.359175  656123 logs.go:282] 0 containers: []
	W1006 14:30:49.359183  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:30:49.359188  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:30:49.359253  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:30:49.385380  656123 cri.go:89] found id: ""
	I1006 14:30:49.385398  656123 logs.go:282] 0 containers: []
	W1006 14:30:49.385405  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:30:49.385410  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:30:49.385461  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:30:49.409982  656123 cri.go:89] found id: ""
	I1006 14:30:49.410009  656123 logs.go:282] 0 containers: []
	W1006 14:30:49.410020  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:30:49.410030  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:30:49.410043  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:30:49.470637  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:30:49.470662  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:30:49.498568  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:30:49.498585  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:30:49.568338  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:30:49.568355  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:30:49.581842  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:30:49.581863  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:30:49.638518  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:30:49.631016   10785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:49.631575   10785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:49.633164   10785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:49.633595   10785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:49.635088   10785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:30:49.631016   10785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:49.631575   10785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:49.633164   10785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:49.633595   10785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:49.635088   10785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 14:30:52.139121  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:30:52.151341  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:30:52.151400  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:30:52.180909  656123 cri.go:89] found id: ""
	I1006 14:30:52.180929  656123 logs.go:282] 0 containers: []
	W1006 14:30:52.180937  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:30:52.180943  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:30:52.181004  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:30:52.212664  656123 cri.go:89] found id: ""
	I1006 14:30:52.212687  656123 logs.go:282] 0 containers: []
	W1006 14:30:52.212695  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:30:52.212700  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:30:52.212753  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:30:52.242804  656123 cri.go:89] found id: ""
	I1006 14:30:52.242824  656123 logs.go:282] 0 containers: []
	W1006 14:30:52.242833  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:30:52.242840  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:30:52.242906  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:30:52.275408  656123 cri.go:89] found id: ""
	I1006 14:30:52.275428  656123 logs.go:282] 0 containers: []
	W1006 14:30:52.275437  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:30:52.275443  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:30:52.275511  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:30:52.304772  656123 cri.go:89] found id: ""
	I1006 14:30:52.304791  656123 logs.go:282] 0 containers: []
	W1006 14:30:52.304797  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:30:52.304802  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:30:52.304855  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:30:52.334628  656123 cri.go:89] found id: ""
	I1006 14:30:52.334646  656123 logs.go:282] 0 containers: []
	W1006 14:30:52.334665  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:30:52.334672  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:30:52.334744  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:30:52.363535  656123 cri.go:89] found id: ""
	I1006 14:30:52.363551  656123 logs.go:282] 0 containers: []
	W1006 14:30:52.363558  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:30:52.363567  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:30:52.363578  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:30:52.395148  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:30:52.395172  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:30:52.467790  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:30:52.467818  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:30:52.483589  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:30:52.483613  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:30:52.547153  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:30:52.538900   10918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:52.539522   10918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:52.541194   10918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:52.541724   10918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:52.543496   10918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:30:52.538900   10918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:52.539522   10918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:52.541194   10918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:52.541724   10918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:52.543496   10918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 14:30:52.547168  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:30:52.547191  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:30:55.111539  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:30:55.123376  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:30:55.123432  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:30:55.151263  656123 cri.go:89] found id: ""
	I1006 14:30:55.151278  656123 logs.go:282] 0 containers: []
	W1006 14:30:55.151285  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:30:55.151289  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:30:55.151354  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:30:55.179099  656123 cri.go:89] found id: ""
	I1006 14:30:55.179116  656123 logs.go:282] 0 containers: []
	W1006 14:30:55.179123  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:30:55.179127  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:30:55.179177  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:30:55.207568  656123 cri.go:89] found id: ""
	I1006 14:30:55.207586  656123 logs.go:282] 0 containers: []
	W1006 14:30:55.207594  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:30:55.207599  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:30:55.207653  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:30:55.236037  656123 cri.go:89] found id: ""
	I1006 14:30:55.236058  656123 logs.go:282] 0 containers: []
	W1006 14:30:55.236068  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:30:55.236075  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:30:55.236132  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:30:55.263286  656123 cri.go:89] found id: ""
	I1006 14:30:55.263304  656123 logs.go:282] 0 containers: []
	W1006 14:30:55.263311  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:30:55.263316  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:30:55.263416  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:30:55.291167  656123 cri.go:89] found id: ""
	I1006 14:30:55.291189  656123 logs.go:282] 0 containers: []
	W1006 14:30:55.291197  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:30:55.291217  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:30:55.291271  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:30:55.318410  656123 cri.go:89] found id: ""
	I1006 14:30:55.318430  656123 logs.go:282] 0 containers: []
	W1006 14:30:55.318440  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:30:55.318450  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:30:55.318461  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:30:55.385160  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:30:55.385187  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:30:55.399050  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:30:55.399076  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:30:55.458418  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:30:55.450518   11025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:55.451123   11025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:55.452726   11025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:55.453351   11025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:55.454908   11025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:30:55.450518   11025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:55.451123   11025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:55.452726   11025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:55.453351   11025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:55.454908   11025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 14:30:55.458432  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:30:55.458448  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:30:55.524792  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:30:55.524816  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:30:58.057888  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:30:58.068966  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:30:58.069020  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:30:58.096398  656123 cri.go:89] found id: ""
	I1006 14:30:58.096415  656123 logs.go:282] 0 containers: []
	W1006 14:30:58.096423  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:30:58.096428  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:30:58.096477  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:30:58.123183  656123 cri.go:89] found id: ""
	I1006 14:30:58.123199  656123 logs.go:282] 0 containers: []
	W1006 14:30:58.123218  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:30:58.123225  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:30:58.123278  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:30:58.149129  656123 cri.go:89] found id: ""
	I1006 14:30:58.149145  656123 logs.go:282] 0 containers: []
	W1006 14:30:58.149152  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:30:58.149156  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:30:58.149231  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:30:58.176154  656123 cri.go:89] found id: ""
	I1006 14:30:58.176171  656123 logs.go:282] 0 containers: []
	W1006 14:30:58.176178  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:30:58.176183  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:30:58.176260  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:30:58.202224  656123 cri.go:89] found id: ""
	I1006 14:30:58.202244  656123 logs.go:282] 0 containers: []
	W1006 14:30:58.202252  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:30:58.202257  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:30:58.202308  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:30:58.228701  656123 cri.go:89] found id: ""
	I1006 14:30:58.228722  656123 logs.go:282] 0 containers: []
	W1006 14:30:58.228731  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:30:58.228738  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:30:58.228789  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:30:58.255405  656123 cri.go:89] found id: ""
	I1006 14:30:58.255424  656123 logs.go:282] 0 containers: []
	W1006 14:30:58.255434  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:30:58.255445  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:30:58.255463  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:30:58.326378  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:30:58.326403  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:30:58.340088  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:30:58.340113  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:30:58.398424  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:30:58.390470   11153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:58.391705   11153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:58.392182   11153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:58.393789   11153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:58.394272   11153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:30:58.390470   11153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:58.391705   11153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:58.392182   11153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:58.393789   11153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:30:58.394272   11153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 14:30:58.398434  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:30:58.398444  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:30:58.458532  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:30:58.458557  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:31:00.988890  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:31:01.000117  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:31:01.000187  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:31:01.027975  656123 cri.go:89] found id: ""
	I1006 14:31:01.027994  656123 logs.go:282] 0 containers: []
	W1006 14:31:01.028005  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:31:01.028011  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:31:01.028073  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:31:01.057671  656123 cri.go:89] found id: ""
	I1006 14:31:01.057689  656123 logs.go:282] 0 containers: []
	W1006 14:31:01.057695  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:31:01.057703  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:31:01.057753  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:31:01.086296  656123 cri.go:89] found id: ""
	I1006 14:31:01.086312  656123 logs.go:282] 0 containers: []
	W1006 14:31:01.086319  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:31:01.086324  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:31:01.086380  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:31:01.115804  656123 cri.go:89] found id: ""
	I1006 14:31:01.115828  656123 logs.go:282] 0 containers: []
	W1006 14:31:01.115838  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:31:01.115846  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:31:01.115914  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:31:01.143626  656123 cri.go:89] found id: ""
	I1006 14:31:01.143652  656123 logs.go:282] 0 containers: []
	W1006 14:31:01.143662  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:31:01.143669  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:31:01.143730  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:31:01.173329  656123 cri.go:89] found id: ""
	I1006 14:31:01.173351  656123 logs.go:282] 0 containers: []
	W1006 14:31:01.173358  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:31:01.173363  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:31:01.173425  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:31:01.202447  656123 cri.go:89] found id: ""
	I1006 14:31:01.202464  656123 logs.go:282] 0 containers: []
	W1006 14:31:01.202472  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:31:01.202481  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:31:01.202493  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:31:01.264676  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:31:01.255680   11269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:01.256306   11269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:01.258878   11269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:01.259545   11269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:01.261098   11269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
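	Every kubectl attempt in this stretch fails the same way: nothing is listening on the apiserver port 8441 inside the node, which is consistent with the empty crictl results for kube-apiserver above. A minimal sketch of how one might confirm that by hand from the host; the profile name is a placeholder for whichever profile this test created, and ss and curl are assumed to be present in the node image:
	
		minikube ssh -p <profile> "sudo ss -ltnp 'sport = :8441'"          # is anything bound to the apiserver port?
		minikube ssh -p <profile> "curl -sk https://localhost:8441/livez"  # probe the apiserver health endpoint; -k skips TLS verification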
	I1006 14:31:01.264688  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:31:01.264701  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:31:01.325726  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:31:01.325755  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:31:01.357935  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:31:01.357956  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:31:01.426320  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:31:01.426346  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
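	One iteration of this diagnostic cycle gathers the same log sources every time. For reference, the exact commands it runs on the node, collected in one place (all taken verbatim from the entries above; the crictl query is repeated once per control-plane component):
	
		sudo crictl ps -a --quiet --name=kube-apiserver   # likewise etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet
		sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
		sudo journalctl -u crio -n 400
		sudo journalctl -u kubelet -n 400
		sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
		sudo `which crictl || echo crictl` ps -a || sudo docker ps -a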
	I1006 14:31:03.942695  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:31:03.954165  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:31:03.954257  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:31:03.982933  656123 cri.go:89] found id: ""
	I1006 14:31:03.982952  656123 logs.go:282] 0 containers: []
	W1006 14:31:03.982960  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:31:03.982966  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:31:03.983023  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:31:04.010750  656123 cri.go:89] found id: ""
	I1006 14:31:04.010768  656123 logs.go:282] 0 containers: []
	W1006 14:31:04.010775  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:31:04.010780  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:31:04.010845  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:31:04.038408  656123 cri.go:89] found id: ""
	I1006 14:31:04.038430  656123 logs.go:282] 0 containers: []
	W1006 14:31:04.038440  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:31:04.038446  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:31:04.038506  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:31:04.065987  656123 cri.go:89] found id: ""
	I1006 14:31:04.066004  656123 logs.go:282] 0 containers: []
	W1006 14:31:04.066011  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:31:04.066017  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:31:04.066064  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:31:04.092615  656123 cri.go:89] found id: ""
	I1006 14:31:04.092635  656123 logs.go:282] 0 containers: []
	W1006 14:31:04.092645  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:31:04.092651  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:31:04.092715  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:31:04.120296  656123 cri.go:89] found id: ""
	I1006 14:31:04.120314  656123 logs.go:282] 0 containers: []
	W1006 14:31:04.120324  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:31:04.120331  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:31:04.120392  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:31:04.148258  656123 cri.go:89] found id: ""
	I1006 14:31:04.148275  656123 logs.go:282] 0 containers: []
	W1006 14:31:04.148282  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:31:04.148291  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:31:04.148303  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:31:04.162693  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:31:04.162716  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:31:04.222565  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:31:04.214872   11401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:04.215499   11401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:04.216999   11401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:04.217486   11401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:04.218767   11401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1006 14:31:04.222576  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:31:04.222588  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:31:04.284619  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:31:04.284645  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:31:04.315049  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:31:04.315067  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:31:06.880125  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:31:06.891035  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:31:06.891100  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:31:06.919022  656123 cri.go:89] found id: ""
	I1006 14:31:06.919039  656123 logs.go:282] 0 containers: []
	W1006 14:31:06.919054  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:31:06.919059  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:31:06.919109  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:31:06.945007  656123 cri.go:89] found id: ""
	I1006 14:31:06.945023  656123 logs.go:282] 0 containers: []
	W1006 14:31:06.945030  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:31:06.945035  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:31:06.945082  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:31:06.971114  656123 cri.go:89] found id: ""
	I1006 14:31:06.971140  656123 logs.go:282] 0 containers: []
	W1006 14:31:06.971150  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:31:06.971156  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:31:06.971219  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:31:06.997325  656123 cri.go:89] found id: ""
	I1006 14:31:06.997341  656123 logs.go:282] 0 containers: []
	W1006 14:31:06.997349  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:31:06.997354  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:31:06.997399  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:31:07.024483  656123 cri.go:89] found id: ""
	I1006 14:31:07.024503  656123 logs.go:282] 0 containers: []
	W1006 14:31:07.024510  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:31:07.024515  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:31:07.024563  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:31:07.050897  656123 cri.go:89] found id: ""
	I1006 14:31:07.050916  656123 logs.go:282] 0 containers: []
	W1006 14:31:07.050924  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:31:07.050929  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:31:07.050988  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:31:07.076681  656123 cri.go:89] found id: ""
	I1006 14:31:07.076698  656123 logs.go:282] 0 containers: []
	W1006 14:31:07.076706  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:31:07.076716  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:31:07.076730  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:31:07.137015  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:31:07.137039  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:31:07.167691  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:31:07.167711  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:31:07.236752  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:31:07.236774  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:31:07.250497  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:31:07.250519  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:31:07.307410  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:31:07.299651   11539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:07.300252   11539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:07.301817   11539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:07.302267   11539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:07.303782   11539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1006 14:31:09.809076  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:31:09.819941  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:31:09.819991  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:31:09.847047  656123 cri.go:89] found id: ""
	I1006 14:31:09.847066  656123 logs.go:282] 0 containers: []
	W1006 14:31:09.847075  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:31:09.847082  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:31:09.847151  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:31:09.873840  656123 cri.go:89] found id: ""
	I1006 14:31:09.873856  656123 logs.go:282] 0 containers: []
	W1006 14:31:09.873862  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:31:09.873867  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:31:09.873923  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:31:09.900892  656123 cri.go:89] found id: ""
	I1006 14:31:09.900908  656123 logs.go:282] 0 containers: []
	W1006 14:31:09.900914  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:31:09.900920  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:31:09.900967  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:31:09.927801  656123 cri.go:89] found id: ""
	I1006 14:31:09.927822  656123 logs.go:282] 0 containers: []
	W1006 14:31:09.927835  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:31:09.927842  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:31:09.927892  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:31:09.955400  656123 cri.go:89] found id: ""
	I1006 14:31:09.955420  656123 logs.go:282] 0 containers: []
	W1006 14:31:09.955428  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:31:09.955433  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:31:09.955484  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:31:09.981624  656123 cri.go:89] found id: ""
	I1006 14:31:09.981640  656123 logs.go:282] 0 containers: []
	W1006 14:31:09.981647  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:31:09.981653  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:31:09.981700  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:31:10.009693  656123 cri.go:89] found id: ""
	I1006 14:31:10.009710  656123 logs.go:282] 0 containers: []
	W1006 14:31:10.009716  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:31:10.009724  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:31:10.009735  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:31:10.075460  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:31:10.075492  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:31:10.089300  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:31:10.089327  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:31:10.148123  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:31:10.140282   11647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:10.140860   11647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:10.142433   11647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:10.142866   11647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:10.144460   11647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1006 14:31:10.148152  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:31:10.148165  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:31:10.210442  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:31:10.210473  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:31:12.742692  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:31:12.754226  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:31:12.754289  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:31:12.783228  656123 cri.go:89] found id: ""
	I1006 14:31:12.783249  656123 logs.go:282] 0 containers: []
	W1006 14:31:12.783256  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:31:12.783263  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:31:12.783324  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:31:12.811693  656123 cri.go:89] found id: ""
	I1006 14:31:12.811715  656123 logs.go:282] 0 containers: []
	W1006 14:31:12.811725  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:31:12.811732  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:31:12.811782  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:31:12.840310  656123 cri.go:89] found id: ""
	I1006 14:31:12.840332  656123 logs.go:282] 0 containers: []
	W1006 14:31:12.840342  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:31:12.840348  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:31:12.840402  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:31:12.869101  656123 cri.go:89] found id: ""
	I1006 14:31:12.869123  656123 logs.go:282] 0 containers: []
	W1006 14:31:12.869131  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:31:12.869137  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:31:12.869189  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:31:12.897605  656123 cri.go:89] found id: ""
	I1006 14:31:12.897623  656123 logs.go:282] 0 containers: []
	W1006 14:31:12.897630  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:31:12.897635  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:31:12.897693  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:31:12.926227  656123 cri.go:89] found id: ""
	I1006 14:31:12.926247  656123 logs.go:282] 0 containers: []
	W1006 14:31:12.926254  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:31:12.926260  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:31:12.926308  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:31:12.955298  656123 cri.go:89] found id: ""
	I1006 14:31:12.955315  656123 logs.go:282] 0 containers: []
	W1006 14:31:12.955324  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:31:12.955334  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:31:12.955348  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:31:13.021936  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:31:13.021962  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:31:13.036093  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:31:13.036115  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:31:13.096234  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:31:13.088298   11777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:13.088908   11777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:13.090517   11777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:13.090973   11777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:13.092543   11777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1006 14:31:13.096246  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:31:13.096258  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:31:13.156934  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:31:13.156960  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:31:15.689959  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:31:15.701228  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:31:15.701301  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:31:15.727030  656123 cri.go:89] found id: ""
	I1006 14:31:15.727050  656123 logs.go:282] 0 containers: []
	W1006 14:31:15.727059  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:31:15.727067  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:31:15.727119  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:31:15.753392  656123 cri.go:89] found id: ""
	I1006 14:31:15.753409  656123 logs.go:282] 0 containers: []
	W1006 14:31:15.753417  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:31:15.753421  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:31:15.753471  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:31:15.780750  656123 cri.go:89] found id: ""
	I1006 14:31:15.780775  656123 logs.go:282] 0 containers: []
	W1006 14:31:15.780783  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:31:15.780788  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:31:15.780842  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:31:15.807372  656123 cri.go:89] found id: ""
	I1006 14:31:15.807388  656123 logs.go:282] 0 containers: []
	W1006 14:31:15.807401  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:31:15.807406  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:31:15.807461  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:31:15.834188  656123 cri.go:89] found id: ""
	I1006 14:31:15.834222  656123 logs.go:282] 0 containers: []
	W1006 14:31:15.834233  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:31:15.834240  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:31:15.834293  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:31:15.861606  656123 cri.go:89] found id: ""
	I1006 14:31:15.861624  656123 logs.go:282] 0 containers: []
	W1006 14:31:15.861631  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:31:15.861636  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:31:15.861702  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:31:15.888991  656123 cri.go:89] found id: ""
	I1006 14:31:15.889007  656123 logs.go:282] 0 containers: []
	W1006 14:31:15.889014  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:31:15.889022  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:31:15.889035  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:31:15.956002  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:31:15.956024  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:31:15.969830  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:31:15.969850  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:31:16.026629  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:31:16.019009   11895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:16.019537   11895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:16.021047   11895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:16.021513   11895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:16.023044   11895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1006 14:31:16.026643  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:31:16.026656  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:31:16.085192  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:31:16.085220  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:31:18.616289  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:31:18.627239  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:31:18.627304  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:31:18.655298  656123 cri.go:89] found id: ""
	I1006 14:31:18.655318  656123 logs.go:282] 0 containers: []
	W1006 14:31:18.655327  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:31:18.655334  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:31:18.655392  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:31:18.682590  656123 cri.go:89] found id: ""
	I1006 14:31:18.682609  656123 logs.go:282] 0 containers: []
	W1006 14:31:18.682616  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:31:18.682623  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:31:18.682684  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:31:18.709329  656123 cri.go:89] found id: ""
	I1006 14:31:18.709349  656123 logs.go:282] 0 containers: []
	W1006 14:31:18.709359  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:31:18.709366  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:31:18.709428  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:31:18.735272  656123 cri.go:89] found id: ""
	I1006 14:31:18.735292  656123 logs.go:282] 0 containers: []
	W1006 14:31:18.735302  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:31:18.735309  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:31:18.735370  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:31:18.761956  656123 cri.go:89] found id: ""
	I1006 14:31:18.761973  656123 logs.go:282] 0 containers: []
	W1006 14:31:18.761980  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:31:18.761984  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:31:18.762047  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:31:18.788186  656123 cri.go:89] found id: ""
	I1006 14:31:18.788224  656123 logs.go:282] 0 containers: []
	W1006 14:31:18.788234  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:31:18.788241  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:31:18.788293  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:31:18.814751  656123 cri.go:89] found id: ""
	I1006 14:31:18.814768  656123 logs.go:282] 0 containers: []
	W1006 14:31:18.814775  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:31:18.814783  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:31:18.814793  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:31:18.874634  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:31:18.867140   12017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:18.867734   12017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:18.869314   12017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:18.869766   12017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:18.871291   12017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1006 14:31:18.874645  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:31:18.874658  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:31:18.934741  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:31:18.934765  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:31:18.964835  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:31:18.964857  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:31:19.034348  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:31:19.034372  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
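	The timestamps show this cycle repeating roughly every three seconds, each pass starting from the pgrep check for a running apiserver. minikube drives the retry from its own Go code; as a rough shell rendering of the observable pattern only, not the actual implementation:
	
		# poll until a kube-apiserver process for this profile appears (the real loop also enforces a timeout)
		until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
		    sleep 3
		done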
	I1006 14:31:21.549097  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:31:21.560431  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:31:21.560497  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:31:21.588270  656123 cri.go:89] found id: ""
	I1006 14:31:21.588285  656123 logs.go:282] 0 containers: []
	W1006 14:31:21.588292  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:31:21.588297  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:31:21.588352  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:31:21.615501  656123 cri.go:89] found id: ""
	I1006 14:31:21.615519  656123 logs.go:282] 0 containers: []
	W1006 14:31:21.615527  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:31:21.615532  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:31:21.615590  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:31:21.643122  656123 cri.go:89] found id: ""
	I1006 14:31:21.643143  656123 logs.go:282] 0 containers: []
	W1006 14:31:21.643150  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:31:21.643154  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:31:21.643222  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:31:21.670611  656123 cri.go:89] found id: ""
	I1006 14:31:21.670628  656123 logs.go:282] 0 containers: []
	W1006 14:31:21.670635  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:31:21.670642  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:31:21.670705  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:31:21.698443  656123 cri.go:89] found id: ""
	I1006 14:31:21.698460  656123 logs.go:282] 0 containers: []
	W1006 14:31:21.698467  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:31:21.698472  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:31:21.698521  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:31:21.726957  656123 cri.go:89] found id: ""
	I1006 14:31:21.726973  656123 logs.go:282] 0 containers: []
	W1006 14:31:21.726981  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:31:21.726986  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:31:21.727032  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:31:21.754606  656123 cri.go:89] found id: ""
	I1006 14:31:21.754628  656123 logs.go:282] 0 containers: []
	W1006 14:31:21.754638  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:31:21.754648  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:31:21.754661  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:31:21.814709  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:31:21.814731  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:31:21.846526  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:31:21.846543  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:31:21.915125  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:31:21.915156  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:31:21.929444  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:31:21.929482  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:31:21.988239  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:31:21.980740   12168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:21.981329   12168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:21.982927   12168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:21.983357   12168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:21.984775   12168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1006 14:31:24.489339  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:31:24.500246  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:31:24.500303  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:31:24.527224  656123 cri.go:89] found id: ""
	I1006 14:31:24.527243  656123 logs.go:282] 0 containers: []
	W1006 14:31:24.527253  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:31:24.527258  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:31:24.527309  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:31:24.552540  656123 cri.go:89] found id: ""
	I1006 14:31:24.552559  656123 logs.go:282] 0 containers: []
	W1006 14:31:24.552567  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:31:24.552573  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:31:24.552636  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:31:24.581110  656123 cri.go:89] found id: ""
	I1006 14:31:24.581125  656123 logs.go:282] 0 containers: []
	W1006 14:31:24.581131  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:31:24.581138  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:31:24.581201  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:31:24.607563  656123 cri.go:89] found id: ""
	I1006 14:31:24.607580  656123 logs.go:282] 0 containers: []
	W1006 14:31:24.607588  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:31:24.607592  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:31:24.607649  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:31:24.633221  656123 cri.go:89] found id: ""
	I1006 14:31:24.633241  656123 logs.go:282] 0 containers: []
	W1006 14:31:24.633249  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:31:24.633255  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:31:24.633303  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:31:24.658521  656123 cri.go:89] found id: ""
	I1006 14:31:24.658540  656123 logs.go:282] 0 containers: []
	W1006 14:31:24.658547  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:31:24.658552  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:31:24.658611  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:31:24.684336  656123 cri.go:89] found id: ""
	I1006 14:31:24.684351  656123 logs.go:282] 0 containers: []
	W1006 14:31:24.684358  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:31:24.684367  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:31:24.684381  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:31:24.743258  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:31:24.735488   12275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:24.735921   12275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:24.737653   12275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:24.738173   12275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:24.739491   12275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1006 14:31:24.743270  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:31:24.743283  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:31:24.802373  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:31:24.802398  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:31:24.832699  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:31:24.832716  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:31:24.898746  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:31:24.898768  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
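Each failed poll above ends with the same four collectors: kubelet and CRI-O unit logs via journalctl, kernel messages via dmesg filtered to warning severity and above, and a container listing. A minimal shell sketch of that gather pass, assuming direct access to the node (for example over the same SSH channel ssh_runner uses); the flags are copied from the commands in the log:

    # Last 400 lines of the kubelet and CRI-O systemd units
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u crio -n 400
    # Kernel messages: -P no pager, -H human-readable output,
    # -L=never disables color, --level keeps warn and more severe only
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400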
	I1006 14:31:27.413617  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:31:27.424393  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:31:27.424454  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:31:27.452153  656123 cri.go:89] found id: ""
	I1006 14:31:27.452173  656123 logs.go:282] 0 containers: []
	W1006 14:31:27.452181  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:31:27.452186  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:31:27.452268  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:31:27.477797  656123 cri.go:89] found id: ""
	I1006 14:31:27.477815  656123 logs.go:282] 0 containers: []
	W1006 14:31:27.477822  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:31:27.477827  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:31:27.477881  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:31:27.502952  656123 cri.go:89] found id: ""
	I1006 14:31:27.502971  656123 logs.go:282] 0 containers: []
	W1006 14:31:27.502978  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:31:27.502983  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:31:27.503039  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:31:27.529416  656123 cri.go:89] found id: ""
	I1006 14:31:27.529433  656123 logs.go:282] 0 containers: []
	W1006 14:31:27.529440  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:31:27.529444  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:31:27.529504  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:31:27.554632  656123 cri.go:89] found id: ""
	I1006 14:31:27.554651  656123 logs.go:282] 0 containers: []
	W1006 14:31:27.554659  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:31:27.554664  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:31:27.554713  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:31:27.580924  656123 cri.go:89] found id: ""
	I1006 14:31:27.580942  656123 logs.go:282] 0 containers: []
	W1006 14:31:27.580948  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:31:27.580954  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:31:27.581007  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:31:27.605807  656123 cri.go:89] found id: ""
	I1006 14:31:27.605826  656123 logs.go:282] 0 containers: []
	W1006 14:31:27.605836  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:31:27.605846  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:31:27.605860  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:31:27.618904  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:31:27.618922  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:31:27.677305  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:31:27.669937   12394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:27.670557   12394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:27.672091   12394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:27.672543   12394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:27.673638   12394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:31:27.669937   12394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:27.670557   12394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:27.672091   12394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:27.672543   12394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:27.673638   12394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 14:31:27.677315  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:31:27.677326  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:31:27.739103  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:31:27.739125  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:31:27.767028  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:31:27.767049  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
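The probe phase at the top of every cycle asks the runtime for each expected control-plane container by name; with --quiet, crictl prints only container IDs, so an empty string is exactly the `found id: ""` result logged above. The same probe as a standalone loop, a sketch using the component names taken from this log:

    for name in kube-apiserver etcd coredns kube-scheduler \
                kube-proxy kube-controller-manager kindnet; do
      # -a includes exited containers; --quiet prints IDs only
      ids=$(sudo crictl ps -a --quiet --name="$name")
      [ -z "$ids" ] && echo "no container matching \"$name\""
    done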
	I1006 14:31:30.336333  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:31:30.348665  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:31:30.348724  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:31:30.377945  656123 cri.go:89] found id: ""
	I1006 14:31:30.377963  656123 logs.go:282] 0 containers: []
	W1006 14:31:30.377973  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:31:30.377979  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:31:30.378035  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:31:30.406369  656123 cri.go:89] found id: ""
	I1006 14:31:30.406391  656123 logs.go:282] 0 containers: []
	W1006 14:31:30.406400  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:31:30.406407  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:31:30.406484  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:31:30.435610  656123 cri.go:89] found id: ""
	I1006 14:31:30.435634  656123 logs.go:282] 0 containers: []
	W1006 14:31:30.435644  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:31:30.435650  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:31:30.435715  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:31:30.464182  656123 cri.go:89] found id: ""
	I1006 14:31:30.464201  656123 logs.go:282] 0 containers: []
	W1006 14:31:30.464222  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:31:30.464230  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:31:30.464285  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:31:30.493191  656123 cri.go:89] found id: ""
	I1006 14:31:30.493237  656123 logs.go:282] 0 containers: []
	W1006 14:31:30.493254  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:31:30.493260  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:31:30.493313  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:31:30.522664  656123 cri.go:89] found id: ""
	I1006 14:31:30.522684  656123 logs.go:282] 0 containers: []
	W1006 14:31:30.522695  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:31:30.522702  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:31:30.522762  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:31:30.553858  656123 cri.go:89] found id: ""
	I1006 14:31:30.553874  656123 logs.go:282] 0 containers: []
	W1006 14:31:30.553880  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:31:30.553891  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:31:30.553905  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:31:30.625537  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:31:30.625563  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:31:30.641100  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:31:30.641127  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:31:30.705527  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:31:30.696933   12514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:30.697691   12514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:30.699345   12514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:30.699934   12514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:30.701560   12514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:31:30.696933   12514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:30.697691   12514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:30.699345   12514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:30.699934   12514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:30.701560   12514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 14:31:30.705543  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:31:30.705560  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:31:30.768236  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:31:30.768263  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:31:33.302531  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:31:33.314251  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:31:33.314308  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:31:33.343374  656123 cri.go:89] found id: ""
	I1006 14:31:33.343394  656123 logs.go:282] 0 containers: []
	W1006 14:31:33.343404  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:31:33.343411  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:31:33.343491  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:31:33.369870  656123 cri.go:89] found id: ""
	I1006 14:31:33.369885  656123 logs.go:282] 0 containers: []
	W1006 14:31:33.369891  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:31:33.369895  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:31:33.369944  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:31:33.394611  656123 cri.go:89] found id: ""
	I1006 14:31:33.394631  656123 logs.go:282] 0 containers: []
	W1006 14:31:33.394640  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:31:33.394647  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:31:33.394696  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:31:33.420323  656123 cri.go:89] found id: ""
	I1006 14:31:33.420338  656123 logs.go:282] 0 containers: []
	W1006 14:31:33.420345  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:31:33.420350  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:31:33.420399  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:31:33.446454  656123 cri.go:89] found id: ""
	I1006 14:31:33.446483  656123 logs.go:282] 0 containers: []
	W1006 14:31:33.446493  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:31:33.446501  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:31:33.446557  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:31:33.471998  656123 cri.go:89] found id: ""
	I1006 14:31:33.472013  656123 logs.go:282] 0 containers: []
	W1006 14:31:33.472019  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:31:33.472025  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:31:33.472073  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:31:33.498038  656123 cri.go:89] found id: ""
	I1006 14:31:33.498052  656123 logs.go:282] 0 containers: []
	W1006 14:31:33.498058  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:31:33.498067  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:31:33.498077  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:31:33.554956  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:31:33.547323   12635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:33.547831   12635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:33.549458   12635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:33.549938   12635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:33.551501   12635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:31:33.547323   12635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:33.547831   12635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:33.549458   12635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:33.549938   12635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:33.551501   12635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 14:31:33.554967  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:31:33.554978  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:31:33.617723  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:31:33.617747  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:31:33.647466  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:31:33.647482  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:31:33.718107  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:31:33.718128  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
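Every `describe nodes` attempt dies the same way: kubectl cannot even fetch the API group list because nothing is listening on localhost:8441, which is consistent with the empty kube-apiserver probes. Assuming that diagnosis, the state can be confirmed with one request instead of kubectl's five retries; the binary and kubeconfig paths are the ones shown in the log, while /healthz is the standard apiserver health endpoint and is an assumption here:

    # Fails immediately with "connection refused" while the apiserver is down
    curl -ksS https://localhost:8441/healthz || echo "apiserver unreachable"
    # Equivalent check through the bundled kubectl
    sudo /var/lib/minikube/binaries/v1.34.1/kubectl \
        --kubeconfig=/var/lib/minikube/kubeconfig get --raw=/healthz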
	I1006 14:31:36.233955  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:31:36.245297  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:31:36.245362  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:31:36.272483  656123 cri.go:89] found id: ""
	I1006 14:31:36.272502  656123 logs.go:282] 0 containers: []
	W1006 14:31:36.272509  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:31:36.272515  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:31:36.272574  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:31:36.299177  656123 cri.go:89] found id: ""
	I1006 14:31:36.299192  656123 logs.go:282] 0 containers: []
	W1006 14:31:36.299199  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:31:36.299229  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:31:36.299284  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:31:36.325899  656123 cri.go:89] found id: ""
	I1006 14:31:36.325920  656123 logs.go:282] 0 containers: []
	W1006 14:31:36.325938  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:31:36.325946  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:31:36.326000  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:31:36.353043  656123 cri.go:89] found id: ""
	I1006 14:31:36.353059  656123 logs.go:282] 0 containers: []
	W1006 14:31:36.353065  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:31:36.353070  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:31:36.353117  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:31:36.379229  656123 cri.go:89] found id: ""
	I1006 14:31:36.379249  656123 logs.go:282] 0 containers: []
	W1006 14:31:36.379259  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:31:36.379263  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:31:36.379320  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:31:36.407572  656123 cri.go:89] found id: ""
	I1006 14:31:36.407589  656123 logs.go:282] 0 containers: []
	W1006 14:31:36.407596  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:31:36.407601  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:31:36.407651  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:31:36.435005  656123 cri.go:89] found id: ""
	I1006 14:31:36.435022  656123 logs.go:282] 0 containers: []
	W1006 14:31:36.435028  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:31:36.435036  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:31:36.435047  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:31:36.512293  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:31:36.512319  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:31:36.526942  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:31:36.526966  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:31:36.587325  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:31:36.579436   12771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:36.579991   12771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:36.581727   12771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:36.582244   12771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:36.583796   12771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:31:36.579436   12771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:36.579991   12771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:36.581727   12771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:36.582244   12771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:36.583796   12771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 14:31:36.587336  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:31:36.587349  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:31:36.648638  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:31:36.648672  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:31:39.181798  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:31:39.193122  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:31:39.193188  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:31:39.221286  656123 cri.go:89] found id: ""
	I1006 14:31:39.221304  656123 logs.go:282] 0 containers: []
	W1006 14:31:39.221312  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:31:39.221317  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:31:39.221376  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:31:39.248422  656123 cri.go:89] found id: ""
	I1006 14:31:39.248437  656123 logs.go:282] 0 containers: []
	W1006 14:31:39.248445  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:31:39.248450  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:31:39.248497  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:31:39.277291  656123 cri.go:89] found id: ""
	I1006 14:31:39.277308  656123 logs.go:282] 0 containers: []
	W1006 14:31:39.277316  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:31:39.277322  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:31:39.277390  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:31:39.303982  656123 cri.go:89] found id: ""
	I1006 14:31:39.303999  656123 logs.go:282] 0 containers: []
	W1006 14:31:39.304005  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:31:39.304011  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:31:39.304062  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:31:39.330654  656123 cri.go:89] found id: ""
	I1006 14:31:39.330674  656123 logs.go:282] 0 containers: []
	W1006 14:31:39.330681  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:31:39.330686  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:31:39.330732  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:31:39.357141  656123 cri.go:89] found id: ""
	I1006 14:31:39.357156  656123 logs.go:282] 0 containers: []
	W1006 14:31:39.357163  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:31:39.357168  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:31:39.357241  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:31:39.383968  656123 cri.go:89] found id: ""
	I1006 14:31:39.383986  656123 logs.go:282] 0 containers: []
	W1006 14:31:39.383993  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:31:39.384002  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:31:39.384014  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:31:39.451579  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:31:39.451604  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:31:39.465454  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:31:39.465475  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:31:39.523259  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:31:39.515550   12896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:39.516185   12896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:39.517720   12896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:39.518181   12896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:39.519823   12896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:31:39.515550   12896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:39.516185   12896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:39.517720   12896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:39.518181   12896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:39.519823   12896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 14:31:39.523273  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:31:39.523285  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:31:39.585241  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:31:39.585265  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
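Each cycle opens with `pgrep -xnf kube-apiserver.*minikube.*`: -f matches against the full command line, -x requires that full command line to match the pattern exactly, and -n keeps only the newest match. Since no such process exists, the cycle repeats roughly every three seconds, as the timestamps show. A hedged sketch of that wait, with the interval read off the log and everything else illustrative:

    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
        echo "kube-apiserver process not found; retrying in 3s"
        sleep 3
    done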
	I1006 14:31:42.115015  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:31:42.126583  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:31:42.126634  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:31:42.153385  656123 cri.go:89] found id: ""
	I1006 14:31:42.153406  656123 logs.go:282] 0 containers: []
	W1006 14:31:42.153416  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:31:42.153422  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:31:42.153479  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:31:42.181021  656123 cri.go:89] found id: ""
	I1006 14:31:42.181039  656123 logs.go:282] 0 containers: []
	W1006 14:31:42.181049  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:31:42.181055  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:31:42.181116  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:31:42.208104  656123 cri.go:89] found id: ""
	I1006 14:31:42.208123  656123 logs.go:282] 0 containers: []
	W1006 14:31:42.208133  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:31:42.208139  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:31:42.208190  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:31:42.235099  656123 cri.go:89] found id: ""
	I1006 14:31:42.235115  656123 logs.go:282] 0 containers: []
	W1006 14:31:42.235123  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:31:42.235128  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:31:42.235176  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:31:42.262052  656123 cri.go:89] found id: ""
	I1006 14:31:42.262072  656123 logs.go:282] 0 containers: []
	W1006 14:31:42.262079  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:31:42.262084  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:31:42.262142  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:31:42.288093  656123 cri.go:89] found id: ""
	I1006 14:31:42.288111  656123 logs.go:282] 0 containers: []
	W1006 14:31:42.288119  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:31:42.288124  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:31:42.288179  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:31:42.314049  656123 cri.go:89] found id: ""
	I1006 14:31:42.314068  656123 logs.go:282] 0 containers: []
	W1006 14:31:42.314076  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:31:42.314087  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:31:42.314100  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:31:42.379866  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:31:42.379892  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:31:42.393937  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:31:42.393965  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:31:42.452376  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:31:42.444669   13013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:42.445228   13013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:42.446633   13013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:42.447200   13013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:42.448583   13013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:31:42.444669   13013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:42.445228   13013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:42.446633   13013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:42.447200   13013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:42.448583   13013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 14:31:42.452388  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:31:42.452400  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:31:42.513323  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:31:42.513346  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:31:45.045836  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:31:45.056587  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:31:45.056634  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:31:45.082895  656123 cri.go:89] found id: ""
	I1006 14:31:45.082913  656123 logs.go:282] 0 containers: []
	W1006 14:31:45.082922  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:31:45.082930  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:31:45.082981  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:31:45.109560  656123 cri.go:89] found id: ""
	I1006 14:31:45.109579  656123 logs.go:282] 0 containers: []
	W1006 14:31:45.109589  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:31:45.109595  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:31:45.109651  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:31:45.136033  656123 cri.go:89] found id: ""
	I1006 14:31:45.136055  656123 logs.go:282] 0 containers: []
	W1006 14:31:45.136065  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:31:45.136072  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:31:45.136145  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:31:45.162396  656123 cri.go:89] found id: ""
	I1006 14:31:45.162416  656123 logs.go:282] 0 containers: []
	W1006 14:31:45.162423  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:31:45.162427  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:31:45.162493  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:31:45.188063  656123 cri.go:89] found id: ""
	I1006 14:31:45.188077  656123 logs.go:282] 0 containers: []
	W1006 14:31:45.188084  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:31:45.188090  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:31:45.188135  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:31:45.214119  656123 cri.go:89] found id: ""
	I1006 14:31:45.214140  656123 logs.go:282] 0 containers: []
	W1006 14:31:45.214150  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:31:45.214157  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:31:45.214234  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:31:45.242147  656123 cri.go:89] found id: ""
	I1006 14:31:45.242166  656123 logs.go:282] 0 containers: []
	W1006 14:31:45.242176  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:31:45.242187  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:31:45.242201  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:31:45.311929  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:31:45.311952  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:31:45.324994  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:31:45.325015  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:31:45.381458  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:31:45.373267   13133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:45.374021   13133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:45.374992   13133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:45.376701   13133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:45.377102   13133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:31:45.373267   13133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:45.374021   13133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:45.374992   13133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:45.376701   13133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:45.377102   13133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 14:31:45.381470  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:31:45.381483  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:31:45.445634  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:31:45.445652  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
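The "container status" collector is deliberately defensive: `which crictl || echo crictl` keeps the command line non-empty even if `which` finds nothing, and the final `|| sudo docker ps -a` falls back to Docker when crictl itself fails. The same one-liner, unpacked:

    # Resolve crictl, falling back to the bare name if it is not on PATH
    crictl_bin=$(which crictl || echo crictl)
    # List all containers; if crictl fails for any reason, ask Docker instead
    sudo "$crictl_bin" ps -a || sudo docker ps -a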
	I1006 14:31:47.975088  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:31:47.986084  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:31:47.986144  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:31:48.013186  656123 cri.go:89] found id: ""
	I1006 14:31:48.013218  656123 logs.go:282] 0 containers: []
	W1006 14:31:48.013229  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:31:48.013235  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:31:48.013289  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:31:48.039286  656123 cri.go:89] found id: ""
	I1006 14:31:48.039301  656123 logs.go:282] 0 containers: []
	W1006 14:31:48.039308  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:31:48.039313  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:31:48.039361  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:31:48.065798  656123 cri.go:89] found id: ""
	I1006 14:31:48.065813  656123 logs.go:282] 0 containers: []
	W1006 14:31:48.065821  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:31:48.065826  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:31:48.065873  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:31:48.091102  656123 cri.go:89] found id: ""
	I1006 14:31:48.091119  656123 logs.go:282] 0 containers: []
	W1006 14:31:48.091128  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:31:48.091133  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:31:48.091188  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:31:48.117766  656123 cri.go:89] found id: ""
	I1006 14:31:48.117783  656123 logs.go:282] 0 containers: []
	W1006 14:31:48.117790  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:31:48.117795  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:31:48.117844  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:31:48.144583  656123 cri.go:89] found id: ""
	I1006 14:31:48.144598  656123 logs.go:282] 0 containers: []
	W1006 14:31:48.144604  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:31:48.144609  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:31:48.144655  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:31:48.171397  656123 cri.go:89] found id: ""
	I1006 14:31:48.171413  656123 logs.go:282] 0 containers: []
	W1006 14:31:48.171421  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:31:48.171429  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:31:48.171440  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:31:48.232721  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:31:48.232743  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:31:48.262521  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:31:48.262540  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:31:48.332831  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:31:48.332851  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:31:48.346228  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:31:48.346247  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:31:48.402332  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:31:48.395067   13273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:48.395636   13273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:48.397181   13273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:48.397582   13273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:48.399142   13273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:31:48.395067   13273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:48.395636   13273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:48.397181   13273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:48.397582   13273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:48.399142   13273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
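Stepping back, this whole stretch of the log is a single health-wait loop: probe for an apiserver process, find nothing, gather diagnostics, sleep about three seconds, repeat. A compact reconstruction of that outer loop with an explicit deadline; the three-second cadence comes from the timestamps above, while the timeout value and the health check are illustrative assumptions, not taken from minikube's source:

    deadline=$((SECONDS + 360))        # illustrative timeout, not from the log
    while (( SECONDS < deadline )); do
        if sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null &&
           curl -ksS https://localhost:8441/healthz >/dev/null 2>&1; then
            echo "kube-apiserver is up"
            exit 0
        fi
        sleep 3                        # retry cadence seen in the log
    done
    echo "timed out waiting for kube-apiserver" >&2
    exit 1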
	I1006 14:31:50.903091  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:31:50.914581  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:31:50.914643  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:31:50.940118  656123 cri.go:89] found id: ""
	I1006 14:31:50.940134  656123 logs.go:282] 0 containers: []
	W1006 14:31:50.940144  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:31:50.940152  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:31:50.940244  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:31:50.967927  656123 cri.go:89] found id: ""
	I1006 14:31:50.967942  656123 logs.go:282] 0 containers: []
	W1006 14:31:50.967950  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:31:50.967955  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:31:50.968012  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:31:50.994911  656123 cri.go:89] found id: ""
	I1006 14:31:50.994926  656123 logs.go:282] 0 containers: []
	W1006 14:31:50.994933  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:31:50.994938  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:31:50.994983  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:31:51.021349  656123 cri.go:89] found id: ""
	I1006 14:31:51.021367  656123 logs.go:282] 0 containers: []
	W1006 14:31:51.021376  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:31:51.021381  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:31:51.021450  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:31:51.047856  656123 cri.go:89] found id: ""
	I1006 14:31:51.047875  656123 logs.go:282] 0 containers: []
	W1006 14:31:51.047885  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:31:51.047892  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:31:51.047953  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:31:51.074984  656123 cri.go:89] found id: ""
	I1006 14:31:51.075002  656123 logs.go:282] 0 containers: []
	W1006 14:31:51.075009  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:31:51.075014  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:31:51.075076  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:31:51.102644  656123 cri.go:89] found id: ""
	I1006 14:31:51.102660  656123 logs.go:282] 0 containers: []
	W1006 14:31:51.102668  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:31:51.102677  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:31:51.102692  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:31:51.164842  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:31:51.164869  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:31:51.194272  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:31:51.194293  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:31:51.264785  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:31:51.264809  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:31:51.279283  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:31:51.279311  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:31:51.337565  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:31:51.329770   13401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:51.330346   13401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:51.331936   13401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:51.332399   13401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:51.334039   13401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:31:51.329770   13401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:51.330346   13401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:51.331936   13401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:51.332399   13401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:51.334039   13401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
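
The loop above is minikube's apiserver wait: every few seconds it looks for a running kube-apiserver process, then asks CRI-O for containers matching each control-plane component, and every probe comes back empty before the diagnostics pass repeats. A minimal sketch of the same checks, runnable by hand inside the node (e.g. via "minikube ssh"); the component list is copied from the Run lines above, while the loop wrapper itself is an editorial illustration, not minikube's code:

    # Probe for control-plane containers the way the log does: process first, then CRI-O.
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      [ -z "$ids" ] && echo "No container was found matching \"$name\""
    done
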
	I1006 14:31:53.839279  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:31:53.850387  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:31:53.850446  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:31:53.878099  656123 cri.go:89] found id: ""
	I1006 14:31:53.878125  656123 logs.go:282] 0 containers: []
	W1006 14:31:53.878135  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:31:53.878142  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:31:53.878199  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:31:53.905974  656123 cri.go:89] found id: ""
	I1006 14:31:53.905994  656123 logs.go:282] 0 containers: []
	W1006 14:31:53.906004  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:31:53.906011  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:31:53.906073  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:31:53.934338  656123 cri.go:89] found id: ""
	I1006 14:31:53.934355  656123 logs.go:282] 0 containers: []
	W1006 14:31:53.934362  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:31:53.934367  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:31:53.934417  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:31:53.961409  656123 cri.go:89] found id: ""
	I1006 14:31:53.961428  656123 logs.go:282] 0 containers: []
	W1006 14:31:53.961436  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:31:53.961442  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:31:53.961492  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:31:53.988451  656123 cri.go:89] found id: ""
	I1006 14:31:53.988468  656123 logs.go:282] 0 containers: []
	W1006 14:31:53.988475  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:31:53.988481  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:31:53.988541  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:31:54.015683  656123 cri.go:89] found id: ""
	I1006 14:31:54.015703  656123 logs.go:282] 0 containers: []
	W1006 14:31:54.015712  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:31:54.015718  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:31:54.015769  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:31:54.043179  656123 cri.go:89] found id: ""
	I1006 14:31:54.043196  656123 logs.go:282] 0 containers: []
	W1006 14:31:54.043215  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:31:54.043226  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:31:54.043242  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:31:54.107582  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:31:54.107606  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:31:54.138057  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:31:54.138078  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:31:54.204366  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:31:54.204394  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:31:54.218513  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:31:54.218535  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:31:54.279164  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:31:54.271489   13525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:54.272091   13525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:54.273620   13525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:54.274071   13525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:54.275622   13525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:31:54.271489   13525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:54.272091   13525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:54.273620   13525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:54.274071   13525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:54.275622   13525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 14:31:56.780360  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:31:56.791915  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:31:56.791969  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:31:56.817452  656123 cri.go:89] found id: ""
	I1006 14:31:56.817470  656123 logs.go:282] 0 containers: []
	W1006 14:31:56.817477  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:31:56.817483  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:31:56.817529  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:31:56.842632  656123 cri.go:89] found id: ""
	I1006 14:31:56.842646  656123 logs.go:282] 0 containers: []
	W1006 14:31:56.842653  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:31:56.842657  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:31:56.842700  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:31:56.870346  656123 cri.go:89] found id: ""
	I1006 14:31:56.870361  656123 logs.go:282] 0 containers: []
	W1006 14:31:56.870368  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:31:56.870373  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:31:56.870421  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:31:56.898085  656123 cri.go:89] found id: ""
	I1006 14:31:56.898102  656123 logs.go:282] 0 containers: []
	W1006 14:31:56.898107  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:31:56.898112  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:31:56.898172  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:31:56.925826  656123 cri.go:89] found id: ""
	I1006 14:31:56.925842  656123 logs.go:282] 0 containers: []
	W1006 14:31:56.925849  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:31:56.925854  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:31:56.925917  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:31:56.952736  656123 cri.go:89] found id: ""
	I1006 14:31:56.952753  656123 logs.go:282] 0 containers: []
	W1006 14:31:56.952759  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:31:56.952764  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:31:56.952817  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:31:56.981505  656123 cri.go:89] found id: ""
	I1006 14:31:56.981524  656123 logs.go:282] 0 containers: []
	W1006 14:31:56.981534  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:31:56.981544  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:31:56.981558  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:31:57.038974  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:31:57.031730   13621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:57.032302   13621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:57.033897   13621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:57.034349   13621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:57.035558   13621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:31:57.031730   13621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:57.032302   13621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:57.033897   13621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:57.034349   13621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:57.035558   13621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 14:31:57.038998  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:31:57.039009  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:31:57.104175  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:31:57.104199  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:31:57.133096  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:31:57.133118  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:31:57.198894  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:31:57.198924  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
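
When no component containers turn up, minikube falls back to gathering diagnostics: the CRI-O and kubelet journals, recent kernel warnings, a container listing, and "kubectl describe nodes", which keeps failing with connection refused because nothing is listening on localhost:8441 yet. The collection commands below are lifted verbatim from the Run lines above, for manual reproduction on the node:

    # Diagnostics minikube collects on each failed iteration.
    sudo journalctl -u crio -n 400
    sudo journalctl -u kubelet -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
    sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes \
         --kubeconfig=/var/lib/minikube/kubeconfig
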
	I1006 14:31:59.714028  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:31:59.725916  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:31:59.725972  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:31:59.751782  656123 cri.go:89] found id: ""
	I1006 14:31:59.751801  656123 logs.go:282] 0 containers: []
	W1006 14:31:59.751810  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:31:59.751816  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:31:59.751864  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:31:59.776851  656123 cri.go:89] found id: ""
	I1006 14:31:59.776867  656123 logs.go:282] 0 containers: []
	W1006 14:31:59.776874  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:31:59.776878  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:31:59.776924  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:31:59.800431  656123 cri.go:89] found id: ""
	I1006 14:31:59.800447  656123 logs.go:282] 0 containers: []
	W1006 14:31:59.800455  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:31:59.800467  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:31:59.800530  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:31:59.825387  656123 cri.go:89] found id: ""
	I1006 14:31:59.825404  656123 logs.go:282] 0 containers: []
	W1006 14:31:59.825412  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:31:59.825423  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:31:59.825468  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:31:59.849169  656123 cri.go:89] found id: ""
	I1006 14:31:59.849186  656123 logs.go:282] 0 containers: []
	W1006 14:31:59.849195  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:31:59.849232  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:31:59.849291  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:31:59.874820  656123 cri.go:89] found id: ""
	I1006 14:31:59.874835  656123 logs.go:282] 0 containers: []
	W1006 14:31:59.874841  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:31:59.874846  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:31:59.874893  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:31:59.900818  656123 cri.go:89] found id: ""
	I1006 14:31:59.900834  656123 logs.go:282] 0 containers: []
	W1006 14:31:59.900840  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:31:59.900848  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:31:59.900860  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:31:59.957989  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:31:59.950533   13743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:59.951047   13743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:59.952664   13743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:59.953012   13743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:59.954540   13743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:31:59.950533   13743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:59.951047   13743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:59.952664   13743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:59.953012   13743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:31:59.954540   13743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 14:31:59.958004  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:31:59.958025  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:32:00.016244  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:32:00.016287  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:32:00.047330  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:32:00.047346  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:32:00.111078  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:32:00.111104  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:32:02.626253  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:32:02.637551  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:32:02.637606  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:32:02.665023  656123 cri.go:89] found id: ""
	I1006 14:32:02.665040  656123 logs.go:282] 0 containers: []
	W1006 14:32:02.665050  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:32:02.665056  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:32:02.665118  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:32:02.692374  656123 cri.go:89] found id: ""
	I1006 14:32:02.692397  656123 logs.go:282] 0 containers: []
	W1006 14:32:02.692404  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:32:02.692409  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:32:02.692458  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:32:02.719922  656123 cri.go:89] found id: ""
	I1006 14:32:02.719942  656123 logs.go:282] 0 containers: []
	W1006 14:32:02.719953  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:32:02.719959  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:32:02.720014  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:32:02.746934  656123 cri.go:89] found id: ""
	I1006 14:32:02.746950  656123 logs.go:282] 0 containers: []
	W1006 14:32:02.746956  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:32:02.746962  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:32:02.747009  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:32:02.774417  656123 cri.go:89] found id: ""
	I1006 14:32:02.774435  656123 logs.go:282] 0 containers: []
	W1006 14:32:02.774442  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:32:02.774447  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:32:02.774496  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:32:02.801761  656123 cri.go:89] found id: ""
	I1006 14:32:02.801776  656123 logs.go:282] 0 containers: []
	W1006 14:32:02.801783  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:32:02.801788  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:32:02.801844  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:32:02.828981  656123 cri.go:89] found id: ""
	I1006 14:32:02.828998  656123 logs.go:282] 0 containers: []
	W1006 14:32:02.829005  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:32:02.829014  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:32:02.829028  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:32:02.895754  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:32:02.895778  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:32:02.909930  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:32:02.909950  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:32:02.968533  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:32:02.961042   13877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:32:02.961577   13877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:32:02.963104   13877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:32:02.963565   13877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:32:02.965085   13877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:32:02.961042   13877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:32:02.961577   13877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:32:02.963104   13877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:32:02.963565   13877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:32:02.965085   13877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 14:32:02.968546  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:32:02.968560  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:32:03.033943  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:32:03.033967  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:32:05.566153  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:32:05.577534  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:32:05.577601  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:32:05.604282  656123 cri.go:89] found id: ""
	I1006 14:32:05.604301  656123 logs.go:282] 0 containers: []
	W1006 14:32:05.604311  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:32:05.604317  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:32:05.604375  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:32:05.631089  656123 cri.go:89] found id: ""
	I1006 14:32:05.631105  656123 logs.go:282] 0 containers: []
	W1006 14:32:05.631112  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:32:05.631116  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:32:05.631172  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:32:05.658464  656123 cri.go:89] found id: ""
	I1006 14:32:05.658484  656123 logs.go:282] 0 containers: []
	W1006 14:32:05.658495  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:32:05.658501  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:32:05.658559  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:32:05.685951  656123 cri.go:89] found id: ""
	I1006 14:32:05.685971  656123 logs.go:282] 0 containers: []
	W1006 14:32:05.685980  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:32:05.685987  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:32:05.686043  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:32:05.712003  656123 cri.go:89] found id: ""
	I1006 14:32:05.712020  656123 logs.go:282] 0 containers: []
	W1006 14:32:05.712028  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:32:05.712033  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:32:05.712093  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:32:05.740632  656123 cri.go:89] found id: ""
	I1006 14:32:05.740652  656123 logs.go:282] 0 containers: []
	W1006 14:32:05.740660  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:32:05.740667  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:32:05.740728  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:32:05.766042  656123 cri.go:89] found id: ""
	I1006 14:32:05.766064  656123 logs.go:282] 0 containers: []
	W1006 14:32:05.766072  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:32:05.766080  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:32:05.766092  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:32:05.837102  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:32:05.837132  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:32:05.851014  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:32:05.851038  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:32:05.910902  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:32:05.903038   14001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:32:05.903650   14001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:32:05.905294   14001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:32:05.905834   14001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:32:05.907440   14001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:32:05.903038   14001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:32:05.903650   14001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:32:05.905294   14001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:32:05.905834   14001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:32:05.907440   14001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 14:32:05.910914  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:32:05.910927  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:32:05.975171  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:32:05.975197  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:32:08.507407  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:32:08.518312  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:32:08.518362  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:32:08.544556  656123 cri.go:89] found id: ""
	I1006 14:32:08.544575  656123 logs.go:282] 0 containers: []
	W1006 14:32:08.544585  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:32:08.544591  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:32:08.544646  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:32:08.569832  656123 cri.go:89] found id: ""
	I1006 14:32:08.569849  656123 logs.go:282] 0 containers: []
	W1006 14:32:08.569858  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:32:08.569863  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:32:08.569911  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:32:08.595352  656123 cri.go:89] found id: ""
	I1006 14:32:08.595368  656123 logs.go:282] 0 containers: []
	W1006 14:32:08.595377  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:32:08.595383  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:32:08.595447  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:32:08.621980  656123 cri.go:89] found id: ""
	I1006 14:32:08.621995  656123 logs.go:282] 0 containers: []
	W1006 14:32:08.622001  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:32:08.622006  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:32:08.622062  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:32:08.648436  656123 cri.go:89] found id: ""
	I1006 14:32:08.648451  656123 logs.go:282] 0 containers: []
	W1006 14:32:08.648458  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:32:08.648462  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:32:08.648519  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:32:08.673561  656123 cri.go:89] found id: ""
	I1006 14:32:08.673579  656123 logs.go:282] 0 containers: []
	W1006 14:32:08.673589  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:32:08.673595  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:32:08.673657  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:32:08.699829  656123 cri.go:89] found id: ""
	I1006 14:32:08.699847  656123 logs.go:282] 0 containers: []
	W1006 14:32:08.699855  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:32:08.699866  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:32:08.699884  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:32:08.712951  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:32:08.712972  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:32:08.769035  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:32:08.761477   14117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:32:08.762001   14117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:32:08.763631   14117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:32:08.764099   14117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:32:08.765640   14117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:32:08.761477   14117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:32:08.762001   14117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:32:08.763631   14117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:32:08.764099   14117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:32:08.765640   14117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 14:32:08.769047  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:32:08.769063  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:32:08.832511  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:32:08.832534  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:32:08.861346  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:32:08.861364  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:32:11.430582  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:32:11.441872  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:32:11.441923  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:32:11.467567  656123 cri.go:89] found id: ""
	I1006 14:32:11.467586  656123 logs.go:282] 0 containers: []
	W1006 14:32:11.467596  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:32:11.467603  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:32:11.467660  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:32:11.494656  656123 cri.go:89] found id: ""
	I1006 14:32:11.494683  656123 logs.go:282] 0 containers: []
	W1006 14:32:11.494690  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:32:11.494695  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:32:11.494743  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:32:11.521748  656123 cri.go:89] found id: ""
	I1006 14:32:11.521763  656123 logs.go:282] 0 containers: []
	W1006 14:32:11.521770  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:32:11.521775  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:32:11.521820  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:32:11.548602  656123 cri.go:89] found id: ""
	I1006 14:32:11.548620  656123 logs.go:282] 0 containers: []
	W1006 14:32:11.548626  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:32:11.548632  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:32:11.548691  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:32:11.576572  656123 cri.go:89] found id: ""
	I1006 14:32:11.576588  656123 logs.go:282] 0 containers: []
	W1006 14:32:11.576595  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:32:11.576600  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:32:11.576651  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:32:11.603326  656123 cri.go:89] found id: ""
	I1006 14:32:11.603346  656123 logs.go:282] 0 containers: []
	W1006 14:32:11.603355  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:32:11.603360  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:32:11.603415  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:32:11.629710  656123 cri.go:89] found id: ""
	I1006 14:32:11.629728  656123 logs.go:282] 0 containers: []
	W1006 14:32:11.629738  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:32:11.629747  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:32:11.629757  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:32:11.700650  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:32:11.700726  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:32:11.714603  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:32:11.714630  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:32:11.772602  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:32:11.764966   14244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:32:11.765455   14244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:32:11.767171   14244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:32:11.767660   14244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:32:11.769186   14244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:32:11.764966   14244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:32:11.765455   14244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:32:11.767171   14244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:32:11.767660   14244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:32:11.769186   14244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 14:32:11.772614  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:32:11.772626  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:32:11.833230  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:32:11.833254  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:32:14.365875  656123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:32:14.376698  656123 kubeadm.go:601] duration metric: took 4m4.218544485s to restartPrimaryControlPlane
	W1006 14:32:14.376820  656123 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1006 14:32:14.376904  656123 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1006 14:32:14.835776  656123 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1006 14:32:14.848804  656123 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1006 14:32:14.857253  656123 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1006 14:32:14.857310  656123 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1006 14:32:14.864786  656123 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1006 14:32:14.864795  656123 kubeadm.go:157] found existing configuration files:
	
	I1006 14:32:14.864835  656123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1006 14:32:14.872239  656123 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1006 14:32:14.872285  656123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1006 14:32:14.879414  656123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1006 14:32:14.886697  656123 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1006 14:32:14.886741  656123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1006 14:32:14.893638  656123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1006 14:32:14.900861  656123 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1006 14:32:14.900895  656123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1006 14:32:14.907789  656123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1006 14:32:14.914902  656123 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1006 14:32:14.914933  656123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1006 14:32:14.921800  656123 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1006 14:32:14.978601  656123 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1006 14:32:15.038549  656123 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1006 14:36:17.406896  656123 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1006 14:36:17.407019  656123 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
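	The three endpoints in this error are the ones kubeadm polls during the wait-control-plane phase. A minimal sketch for probing them by hand from inside the node (URLs taken verbatim from the error above; -k because the components serve self-signed certificates):
	
	  # kube-apiserver liveness (expect "ok" once healthy)
	  curl -k https://192.168.49.2:8441/livez
	  # kube-scheduler liveness
	  curl -k https://127.0.0.1:10259/livez
	  # kube-controller-manager health
	  curl -k https://127.0.0.1:10257/healthz
	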
	I1006 14:36:17.410627  656123 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1006 14:36:17.410683  656123 kubeadm.go:318] [preflight] Running pre-flight checks
	I1006 14:36:17.410779  656123 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1006 14:36:17.410840  656123 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1006 14:36:17.410869  656123 kubeadm.go:318] OS: Linux
	I1006 14:36:17.410914  656123 kubeadm.go:318] CGROUPS_CPU: enabled
	I1006 14:36:17.410949  656123 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1006 14:36:17.411007  656123 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1006 14:36:17.411060  656123 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1006 14:36:17.411098  656123 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1006 14:36:17.411140  656123 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1006 14:36:17.411189  656123 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1006 14:36:17.411245  656123 kubeadm.go:318] CGROUPS_IO: enabled
	I1006 14:36:17.411317  656123 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1006 14:36:17.411401  656123 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1006 14:36:17.411485  656123 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1006 14:36:17.411556  656123 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1006 14:36:17.413722  656123 out.go:252]   - Generating certificates and keys ...
	I1006 14:36:17.413795  656123 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1006 14:36:17.413884  656123 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1006 14:36:17.413987  656123 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1006 14:36:17.414057  656123 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1006 14:36:17.414137  656123 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1006 14:36:17.414181  656123 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1006 14:36:17.414260  656123 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1006 14:36:17.414334  656123 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1006 14:36:17.414439  656123 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1006 14:36:17.414518  656123 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1006 14:36:17.414578  656123 kubeadm.go:318] [certs] Using the existing "sa" key
	I1006 14:36:17.414662  656123 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1006 14:36:17.414728  656123 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1006 14:36:17.414803  656123 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1006 14:36:17.414845  656123 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1006 14:36:17.414916  656123 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1006 14:36:17.414967  656123 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1006 14:36:17.415028  656123 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1006 14:36:17.415104  656123 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1006 14:36:17.416892  656123 out.go:252]   - Booting up control plane ...
	I1006 14:36:17.416963  656123 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1006 14:36:17.417045  656123 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1006 14:36:17.417099  656123 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1006 14:36:17.417195  656123 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1006 14:36:17.417298  656123 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1006 14:36:17.417388  656123 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1006 14:36:17.417462  656123 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1006 14:36:17.417493  656123 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1006 14:36:17.417595  656123 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1006 14:36:17.417679  656123 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1006 14:36:17.417755  656123 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 502.528699ms
	I1006 14:36:17.417834  656123 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1006 14:36:17.417918  656123 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	I1006 14:36:17.418000  656123 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1006 14:36:17.418064  656123 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1006 14:36:17.418126  656123 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000416419s
	I1006 14:36:17.418196  656123 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000737625s
	I1006 14:36:17.418279  656123 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.00070414s
	I1006 14:36:17.418282  656123 kubeadm.go:318] 
	I1006 14:36:17.418350  656123 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1006 14:36:17.418415  656123 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtime's CLI.
	I1006 14:36:17.418514  656123 kubeadm.go:318] Here is one example of how you may list all running Kubernetes containers by using crictl:
	I1006 14:36:17.418595  656123 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1006 14:36:17.418668  656123 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1006 14:36:17.418749  656123 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1006 14:36:17.418809  656123 kubeadm.go:318] 
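	The crictl advice above can be run as one short sequence; a minimal sketch (CONTAINERID stays a placeholder until the listing identifies the failing container):
	
	  # list every Kubernetes container CRI-O knows about, including exited ones
	  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	  # then inspect the logs of the container that crashed
	  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID
	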
	W1006 14:36:17.418920  656123 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 502.528699ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000416419s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000737625s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.00070414s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI.
	Here is one example of how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	I1006 14:36:17.419037  656123 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1006 14:36:17.865331  656123 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1006 14:36:17.878364  656123 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1006 14:36:17.878407  656123 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1006 14:36:17.886488  656123 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1006 14:36:17.886495  656123 kubeadm.go:157] found existing configuration files:
	
	I1006 14:36:17.886535  656123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1006 14:36:17.894142  656123 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1006 14:36:17.894180  656123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1006 14:36:17.901791  656123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1006 14:36:17.909427  656123 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1006 14:36:17.909474  656123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1006 14:36:17.916720  656123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1006 14:36:17.924474  656123 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1006 14:36:17.924517  656123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1006 14:36:17.931765  656123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1006 14:36:17.939342  656123 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1006 14:36:17.939397  656123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1006 14:36:17.947232  656123 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1006 14:36:17.986103  656123 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1006 14:36:17.986155  656123 kubeadm.go:318] [preflight] Running pre-flight checks
	I1006 14:36:18.005746  656123 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1006 14:36:18.005847  656123 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1006 14:36:18.005884  656123 kubeadm.go:318] OS: Linux
	I1006 14:36:18.005928  656123 kubeadm.go:318] CGROUPS_CPU: enabled
	I1006 14:36:18.005966  656123 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1006 14:36:18.006009  656123 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1006 14:36:18.006047  656123 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1006 14:36:18.006115  656123 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1006 14:36:18.006229  656123 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1006 14:36:18.006274  656123 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1006 14:36:18.006314  656123 kubeadm.go:318] CGROUPS_IO: enabled
	I1006 14:36:18.063701  656123 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1006 14:36:18.063828  656123 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1006 14:36:18.063979  656123 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1006 14:36:18.070276  656123 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1006 14:36:18.073073  656123 out.go:252]   - Generating certificates and keys ...
	I1006 14:36:18.073146  656123 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1006 14:36:18.073230  656123 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1006 14:36:18.073310  656123 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1006 14:36:18.073360  656123 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1006 14:36:18.073469  656123 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1006 14:36:18.073537  656123 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1006 14:36:18.073593  656123 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1006 14:36:18.073643  656123 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1006 14:36:18.073731  656123 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1006 14:36:18.073828  656123 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1006 14:36:18.073881  656123 kubeadm.go:318] [certs] Using the existing "sa" key
	I1006 14:36:18.073950  656123 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1006 14:36:18.358369  656123 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1006 14:36:18.660416  656123 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1006 14:36:18.904822  656123 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1006 14:36:19.181972  656123 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1006 14:36:19.419333  656123 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1006 14:36:19.419883  656123 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1006 14:36:19.422018  656123 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1006 14:36:19.424552  656123 out.go:252]   - Booting up control plane ...
	I1006 14:36:19.424633  656123 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1006 14:36:19.424695  656123 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1006 14:36:19.424766  656123 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1006 14:36:19.438773  656123 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1006 14:36:19.438935  656123 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1006 14:36:19.446167  656123 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1006 14:36:19.446370  656123 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1006 14:36:19.446407  656123 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1006 14:36:19.549636  656123 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1006 14:36:19.549773  656123 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1006 14:36:21.051643  656123 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.501975645s
	I1006 14:36:21.055540  656123 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1006 14:36:21.055642  656123 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	I1006 14:36:21.055761  656123 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1006 14:36:21.055838  656123 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1006 14:40:21.055953  656123 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000134857s
	I1006 14:40:21.056046  656123 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.00022136s
	I1006 14:40:21.056101  656123 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000206831s
	I1006 14:40:21.056104  656123 kubeadm.go:318] 
	I1006 14:40:21.056173  656123 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1006 14:40:21.056304  656123 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtime's CLI.
	I1006 14:40:21.056432  656123 kubeadm.go:318] Here is one example of how you may list all running Kubernetes containers by using crictl:
	I1006 14:40:21.056532  656123 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1006 14:40:21.056641  656123 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1006 14:40:21.056764  656123 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1006 14:40:21.056770  656123 kubeadm.go:318] 
	I1006 14:40:21.060023  656123 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1006 14:40:21.060145  656123 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1006 14:40:21.060722  656123 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8441/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline]
	I1006 14:40:21.060819  656123 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1006 14:40:21.060909  656123 kubeadm.go:402] duration metric: took 12m10.94114452s to StartCluster
	I1006 14:40:21.060976  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:40:21.061036  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:40:21.089107  656123 cri.go:89] found id: ""
	I1006 14:40:21.089130  656123 logs.go:282] 0 containers: []
	W1006 14:40:21.089137  656123 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:40:21.089143  656123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:40:21.089218  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:40:21.116923  656123 cri.go:89] found id: ""
	I1006 14:40:21.116942  656123 logs.go:282] 0 containers: []
	W1006 14:40:21.116948  656123 logs.go:284] No container was found matching "etcd"
	I1006 14:40:21.116954  656123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:40:21.117001  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:40:21.144161  656123 cri.go:89] found id: ""
	I1006 14:40:21.144196  656123 logs.go:282] 0 containers: []
	W1006 14:40:21.144219  656123 logs.go:284] No container was found matching "coredns"
	I1006 14:40:21.144227  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:40:21.144287  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:40:21.173031  656123 cri.go:89] found id: ""
	I1006 14:40:21.173051  656123 logs.go:282] 0 containers: []
	W1006 14:40:21.173059  656123 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:40:21.173065  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:40:21.173117  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:40:21.200194  656123 cri.go:89] found id: ""
	I1006 14:40:21.200232  656123 logs.go:282] 0 containers: []
	W1006 14:40:21.200242  656123 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:40:21.200249  656123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:40:21.200313  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:40:21.227692  656123 cri.go:89] found id: ""
	I1006 14:40:21.227708  656123 logs.go:282] 0 containers: []
	W1006 14:40:21.227715  656123 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:40:21.227720  656123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:40:21.227777  656123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:40:21.255803  656123 cri.go:89] found id: ""
	I1006 14:40:21.255827  656123 logs.go:282] 0 containers: []
	W1006 14:40:21.255836  656123 logs.go:284] No container was found matching "kindnet"
	I1006 14:40:21.255848  656123 logs.go:123] Gathering logs for dmesg ...
	I1006 14:40:21.255863  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:40:21.269683  656123 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:40:21.269708  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:40:21.330259  656123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:40:21.322987   15591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:40:21.323612   15591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:40:21.324719   15591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:40:21.325098   15591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:40:21.326635   15591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1006 14:40:21.330282  656123 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:40:21.330295  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:40:21.395010  656123 logs.go:123] Gathering logs for container status ...
	I1006 14:40:21.395036  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:40:21.425956  656123 logs.go:123] Gathering logs for kubelet ...
	I1006 14:40:21.425975  656123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1006 14:40:21.494244  656123 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.501975645s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000134857s
	[control-plane-check] kube-scheduler is not healthy after 4m0.00022136s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000206831s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI.
	Here is one example of how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8441/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline]
	To see the stack trace of this error execute with --v=5 or higher
	W1006 14:40:21.494316  656123 out.go:285] * 
	W1006 14:40:21.494402  656123 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout/stderr: identical to the "Error starting cluster" output immediately above
	
	W1006 14:40:21.494415  656123 out.go:285] * 
	W1006 14:40:21.496145  656123 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1006 14:40:21.499891  656123 out.go:203] 
	W1006 14:40:21.500973  656123 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout/stderr: identical to the "Error starting cluster" output above
	
	W1006 14:40:21.500999  656123 out.go:285] * 
	I1006 14:40:21.502231  656123 out.go:203] 
	
	
	==> CRI-O <==
	Oct 06 14:40:27 functional-135520 crio[5849]: time="2025-10-06T14:40:27.0047778Z" level=info msg="createCtr: removing container 83805b8561854dd9d34a98d7ff37a1e4d1cc2c233b6304e0896f95705beb330f" id=6c57590f-78c7-40be-84c8-12e3d366b5cb name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:40:27 functional-135520 crio[5849]: time="2025-10-06T14:40:27.004812244Z" level=info msg="createCtr: deleting container 83805b8561854dd9d34a98d7ff37a1e4d1cc2c233b6304e0896f95705beb330f from storage" id=6c57590f-78c7-40be-84c8-12e3d366b5cb name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:40:27 functional-135520 crio[5849]: time="2025-10-06T14:40:27.006886662Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-functional-135520_kube-system_09d686e340c6809af92c3f18dc65ef21_0" id=6c57590f-78c7-40be-84c8-12e3d366b5cb name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:40:28 functional-135520 crio[5849]: time="2025-10-06T14:40:28.981402868Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=35eff68d-ec2a-4a4f-8a4f-56dd8d64a5b0 name=/runtime.v1.ImageService/ImageStatus
	Oct 06 14:40:28 functional-135520 crio[5849]: time="2025-10-06T14:40:28.981729688Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=2a1468bf-8ae9-4a64-90e1-1c81dd0c1b4e name=/runtime.v1.ImageService/ImageStatus
	Oct 06 14:40:28 functional-135520 crio[5849]: time="2025-10-06T14:40:28.98240204Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=6da722e6-a791-420d-8256-16bf9be9aff4 name=/runtime.v1.ImageService/ImageStatus
	Oct 06 14:40:28 functional-135520 crio[5849]: time="2025-10-06T14:40:28.982906148Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=78fbff7f-be8e-4b08-9327-760b75126d70 name=/runtime.v1.ImageService/ImageStatus
	Oct 06 14:40:28 functional-135520 crio[5849]: time="2025-10-06T14:40:28.983360055Z" level=info msg="Creating container: kube-system/kube-apiserver-functional-135520/kube-apiserver" id=77d8cbc3-4dda-4a2a-8a91-d249218dc820 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:40:28 functional-135520 crio[5849]: time="2025-10-06T14:40:28.983650039Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 14:40:28 functional-135520 crio[5849]: time="2025-10-06T14:40:28.984409574Z" level=info msg="Creating container: kube-system/etcd-functional-135520/etcd" id=1441c5bd-90ad-44a1-9442-9782cebd5437 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:40:28 functional-135520 crio[5849]: time="2025-10-06T14:40:28.984790945Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 14:40:28 functional-135520 crio[5849]: time="2025-10-06T14:40:28.98900458Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 14:40:28 functional-135520 crio[5849]: time="2025-10-06T14:40:28.989684839Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 14:40:28 functional-135520 crio[5849]: time="2025-10-06T14:40:28.994766269Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 14:40:28 functional-135520 crio[5849]: time="2025-10-06T14:40:28.996955533Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 14:40:29 functional-135520 crio[5849]: time="2025-10-06T14:40:29.015192311Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=77d8cbc3-4dda-4a2a-8a91-d249218dc820 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:40:29 functional-135520 crio[5849]: time="2025-10-06T14:40:29.016782173Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=1441c5bd-90ad-44a1-9442-9782cebd5437 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:40:29 functional-135520 crio[5849]: time="2025-10-06T14:40:29.017562341Z" level=info msg="createCtr: deleting container ID 9e9f86079013d54b062e9189e536f5e3a7d8444da798fe4507ef32c2fe2675f2 from idIndex" id=77d8cbc3-4dda-4a2a-8a91-d249218dc820 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:40:29 functional-135520 crio[5849]: time="2025-10-06T14:40:29.017605922Z" level=info msg="createCtr: removing container 9e9f86079013d54b062e9189e536f5e3a7d8444da798fe4507ef32c2fe2675f2" id=77d8cbc3-4dda-4a2a-8a91-d249218dc820 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:40:29 functional-135520 crio[5849]: time="2025-10-06T14:40:29.017743804Z" level=info msg="createCtr: deleting container 9e9f86079013d54b062e9189e536f5e3a7d8444da798fe4507ef32c2fe2675f2 from storage" id=77d8cbc3-4dda-4a2a-8a91-d249218dc820 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:40:29 functional-135520 crio[5849]: time="2025-10-06T14:40:29.0190967Z" level=info msg="createCtr: deleting container ID 9fb034cc84ba421928bae88ed1ae8e37ff52f44b9f2c767be8ae0aa2ab6f7977 from idIndex" id=1441c5bd-90ad-44a1-9442-9782cebd5437 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:40:29 functional-135520 crio[5849]: time="2025-10-06T14:40:29.019197705Z" level=info msg="createCtr: removing container 9fb034cc84ba421928bae88ed1ae8e37ff52f44b9f2c767be8ae0aa2ab6f7977" id=1441c5bd-90ad-44a1-9442-9782cebd5437 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:40:29 functional-135520 crio[5849]: time="2025-10-06T14:40:29.019258907Z" level=info msg="createCtr: deleting container 9fb034cc84ba421928bae88ed1ae8e37ff52f44b9f2c767be8ae0aa2ab6f7977 from storage" id=1441c5bd-90ad-44a1-9442-9782cebd5437 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:40:29 functional-135520 crio[5849]: time="2025-10-06T14:40:29.022975089Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-functional-135520_kube-system_f24ebbe4b3fc964d32e35d345c0d3653_0" id=1441c5bd-90ad-44a1-9442-9782cebd5437 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:40:29 functional-135520 crio[5849]: time="2025-10-06T14:40:29.023200796Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-functional-135520_kube-system_9c0f460a73b4e4a7087ce2a722c4cad4_0" id=77d8cbc3-4dda-4a2a-8a91-d249218dc820 name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:40:29.609830   16478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:40:29.610529   16478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:40:29.612458   16478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:40:29.613003   16478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:40:29.614750   16478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	
	
	==> kernel <==
	 14:40:29 up  5:22,  0 user,  load average: 0.24, 0.09, 0.25
	Linux functional-135520 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 06 14:40:26 functional-135520 kubelet[14966]: E1006 14:40:26.020963   14966 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8441/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8441: connect: connection refused" event="&Event{ObjectMeta:{functional-135520.186beda7023a08f5  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-135520,UID:functional-135520,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node functional-135520 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:functional-135520,},FirstTimestamp:2025-10-06 14:36:20.970989813 +0000 UTC m=+1.419813170,LastTimestamp:2025-10-06 14:36:20.970989813 +0000 UTC m=+1.419813170,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-135520,}"
	Oct 06 14:40:26 functional-135520 kubelet[14966]: E1006 14:40:26.980235   14966 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-135520\" not found" node="functional-135520"
	Oct 06 14:40:27 functional-135520 kubelet[14966]: E1006 14:40:27.007240   14966 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 06 14:40:27 functional-135520 kubelet[14966]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 06 14:40:27 functional-135520 kubelet[14966]:  > podSandboxID="e06459a5221479b8f8ca8a805df180001ae8c03ad8ebddffca24e6ba8a2614e8"
	Oct 06 14:40:27 functional-135520 kubelet[14966]: E1006 14:40:27.007373   14966 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 06 14:40:27 functional-135520 kubelet[14966]:         container kube-controller-manager start failed in pod kube-controller-manager-functional-135520_kube-system(09d686e340c6809af92c3f18dc65ef21): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 06 14:40:27 functional-135520 kubelet[14966]:  > logger="UnhandledError"
	Oct 06 14:40:27 functional-135520 kubelet[14966]: E1006 14:40:27.007415   14966 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-functional-135520" podUID="09d686e340c6809af92c3f18dc65ef21"
	Oct 06 14:40:28 functional-135520 kubelet[14966]: E1006 14:40:28.980848   14966 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-135520\" not found" node="functional-135520"
	Oct 06 14:40:28 functional-135520 kubelet[14966]: E1006 14:40:28.981085   14966 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-135520\" not found" node="functional-135520"
	Oct 06 14:40:29 functional-135520 kubelet[14966]: E1006 14:40:29.023531   14966 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 06 14:40:29 functional-135520 kubelet[14966]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 06 14:40:29 functional-135520 kubelet[14966]:  > podSandboxID="0bf6050e948f47f363040ce421949b89bef2d06623cc9fef382c27f04872ce86"
	Oct 06 14:40:29 functional-135520 kubelet[14966]: E1006 14:40:29.023549   14966 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 06 14:40:29 functional-135520 kubelet[14966]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 06 14:40:29 functional-135520 kubelet[14966]:  > podSandboxID="91ab0a64f17ca953284929376780a86381ab6a8cae1f4af7da89790dc4c0e8df"
	Oct 06 14:40:29 functional-135520 kubelet[14966]: E1006 14:40:29.023668   14966 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 06 14:40:29 functional-135520 kubelet[14966]:         container kube-apiserver start failed in pod kube-apiserver-functional-135520_kube-system(9c0f460a73b4e4a7087ce2a722c4cad4): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 06 14:40:29 functional-135520 kubelet[14966]:  > logger="UnhandledError"
	Oct 06 14:40:29 functional-135520 kubelet[14966]: E1006 14:40:29.023801   14966 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-functional-135520" podUID="9c0f460a73b4e4a7087ce2a722c4cad4"
	Oct 06 14:40:29 functional-135520 kubelet[14966]: E1006 14:40:29.023746   14966 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 06 14:40:29 functional-135520 kubelet[14966]:         container etcd start failed in pod etcd-functional-135520_kube-system(f24ebbe4b3fc964d32e35d345c0d3653): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 06 14:40:29 functional-135520 kubelet[14966]:  > logger="UnhandledError"
	Oct 06 14:40:29 functional-135520 kubelet[14966]: E1006 14:40:29.024948   14966 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-functional-135520" podUID="f24ebbe4b3fc964d32e35d345c0d3653"
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-135520 -n functional-135520
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-135520 -n functional-135520: exit status 2 (398.974897ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-135520" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/parallel/MySQL (2.35s)

x
+
TestFunctional/parallel/NodeLabels (1.37s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-135520 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
functional_test.go:234: (dbg) Non-zero exit: kubectl --context functional-135520 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": exit status 1 (56.425201ms)

** stderr ** 
	E1006 14:40:41.115964  678803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1006 14:40:41.116340  678803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1006 14:40:41.118115  678803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1006 14:40:41.118690  678803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1006 14:40:41.119647  678803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:236: failed to 'kubectl get nodes' with args "kubectl --context functional-135520 get nodes --output=go-template \"--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'\"": exit status 1
functional_test.go:242: expected to have label "minikube.k8s.io/commit" in node labels but got : 
** stderr ** 
	E1006 14:40:41.115964  678803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1006 14:40:41.116340  678803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1006 14:40:41.118115  678803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1006 14:40:41.118690  678803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1006 14:40:41.119647  678803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/version" in node labels but got : 
** stderr ** 
	E1006 14:40:41.115964  678803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1006 14:40:41.116340  678803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1006 14:40:41.118115  678803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1006 14:40:41.118690  678803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1006 14:40:41.119647  678803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/updated_at" in node labels but got : 
** stderr ** 
	E1006 14:40:41.115964  678803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1006 14:40:41.116340  678803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1006 14:40:41.118115  678803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1006 14:40:41.118690  678803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1006 14:40:41.119647  678803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/name" in node labels but got : 
** stderr ** 
	E1006 14:40:41.115964  678803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1006 14:40:41.116340  678803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1006 14:40:41.118115  678803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1006 14:40:41.118690  678803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1006 14:40:41.119647  678803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/primary" in node labels but got : 
** stderr ** 
	E1006 14:40:41.115964  678803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1006 14:40:41.116340  678803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1006 14:40:41.118115  678803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1006 14:40:41.118690  678803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1006 14:40:41.119647  678803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/NodeLabels]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/NodeLabels]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-135520
helpers_test.go:243: (dbg) docker inspect functional-135520:

-- stdout --
	[
	    {
	        "Id": "3dd9a226ea42760de316c3cd10c240a06a297d856d2072b1bb8c9de31097dd20",
	        "Created": "2025-10-06T14:13:32.283355011Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 644403,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-06T14:13:32.318096257Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/3dd9a226ea42760de316c3cd10c240a06a297d856d2072b1bb8c9de31097dd20/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3dd9a226ea42760de316c3cd10c240a06a297d856d2072b1bb8c9de31097dd20/hostname",
	        "HostsPath": "/var/lib/docker/containers/3dd9a226ea42760de316c3cd10c240a06a297d856d2072b1bb8c9de31097dd20/hosts",
	        "LogPath": "/var/lib/docker/containers/3dd9a226ea42760de316c3cd10c240a06a297d856d2072b1bb8c9de31097dd20/3dd9a226ea42760de316c3cd10c240a06a297d856d2072b1bb8c9de31097dd20-json.log",
	        "Name": "/functional-135520",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-135520:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-135520",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "3dd9a226ea42760de316c3cd10c240a06a297d856d2072b1bb8c9de31097dd20",
	                "LowerDir": "/var/lib/docker/overlay2/fc963905026931708302dacddcd89a9d41c6b02cea585cc1ff491aa62dc8d60a-init/diff:/var/lib/docker/overlay2/498c39ad2e273bbda04a4b230222b9767ea2da097b1fe98436168d26143cd080/diff",
	                "MergedDir": "/var/lib/docker/overlay2/fc963905026931708302dacddcd89a9d41c6b02cea585cc1ff491aa62dc8d60a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/fc963905026931708302dacddcd89a9d41c6b02cea585cc1ff491aa62dc8d60a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/fc963905026931708302dacddcd89a9d41c6b02cea585cc1ff491aa62dc8d60a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-135520",
	                "Source": "/var/lib/docker/volumes/functional-135520/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-135520",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-135520",
	                "name.minikube.sigs.k8s.io": "functional-135520",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6368ffca3e5840f94a34614c511d9f0a0a4ca0d05de4fe1f94c8bfdc332f1a62",
	            "SandboxKey": "/var/run/docker/netns/6368ffca3e58",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32878"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32879"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32882"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32880"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32881"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-135520": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ae:d1:94:25:38:1c",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f712be59dd18dac98bed5f234c9f77a39e85277143d6f46285adcd3b0185d552",
	                    "EndpointID": "b816964b653b1b5116e3262dfdc87af272931013ef5b9e2714c9ff7357118a6f",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-135520",
	                        "3dd9a226ea42"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-135520 -n functional-135520
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-135520 -n functional-135520: exit status 2 (310.245118ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctional/parallel/NodeLabels FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/NodeLabels]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-135520 logs -n 25
helpers_test.go:260: TestFunctional/parallel/NodeLabels logs: 
-- stdout --
	
	==> Audit <==
	┌───────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND  │                                                               ARGS                                                                │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├───────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ mount     │ -p functional-135520 /tmp/TestFunctionalparallelMountCmdspecific-port2551281271/001:/mount-9p --alsologtostderr -v=1 --port 46464 │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │                     │
	│ ssh       │ functional-135520 ssh findmnt -T /mount-9p | grep 9p                                                                              │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │                     │
	│ image     │ functional-135520 image ls                                                                                                        │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │ 06 Oct 25 14:40 UTC │
	│ image     │ functional-135520 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr         │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │ 06 Oct 25 14:40 UTC │
	│ image     │ functional-135520 image save --daemon kicbase/echo-server:functional-135520 --alsologtostderr                                     │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │ 06 Oct 25 14:40 UTC │
	│ ssh       │ functional-135520 ssh findmnt -T /mount-9p | grep 9p                                                                              │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │ 06 Oct 25 14:40 UTC │
	│ ssh       │ functional-135520 ssh -- ls -la /mount-9p                                                                                         │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │ 06 Oct 25 14:40 UTC │
	│ ssh       │ functional-135520 ssh sudo umount -f /mount-9p                                                                                    │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │                     │
	│ ssh       │ functional-135520 ssh findmnt -T /mount1                                                                                          │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │                     │
	│ mount     │ -p functional-135520 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1055249216/001:/mount3 --alsologtostderr -v=1                │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │                     │
	│ mount     │ -p functional-135520 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1055249216/001:/mount1 --alsologtostderr -v=1                │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │                     │
	│ mount     │ -p functional-135520 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1055249216/001:/mount2 --alsologtostderr -v=1                │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │                     │
	│ ssh       │ functional-135520 ssh findmnt -T /mount1                                                                                          │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │ 06 Oct 25 14:40 UTC │
	│ ssh       │ functional-135520 ssh findmnt -T /mount2                                                                                          │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │ 06 Oct 25 14:40 UTC │
	│ ssh       │ functional-135520 ssh findmnt -T /mount3                                                                                          │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │ 06 Oct 25 14:40 UTC │
	│ mount     │ -p functional-135520 --kill=true                                                                                                  │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │                     │
	│ service   │ functional-135520 service list                                                                                                    │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │                     │
	│ service   │ functional-135520 service list -o json                                                                                            │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │                     │
	│ service   │ functional-135520 service --namespace=default --https --url hello-node                                                            │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │                     │
	│ service   │ functional-135520 service hello-node --url --format={{.IP}}                                                                       │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │                     │
	│ service   │ functional-135520 service hello-node --url                                                                                        │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │                     │
	│ start     │ -p functional-135520 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio                         │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │                     │
	│ start     │ -p functional-135520 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio                         │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │                     │
	│ start     │ -p functional-135520 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                   │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │                     │
	│ dashboard │ --url --port 36195 -p functional-135520 --alsologtostderr -v=1                                                                    │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │                     │
	└───────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/06 14:40:40
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1006 14:40:40.232397  678375 out.go:360] Setting OutFile to fd 1 ...
	I1006 14:40:40.232695  678375 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 14:40:40.232706  678375 out.go:374] Setting ErrFile to fd 2...
	I1006 14:40:40.232710  678375 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 14:40:40.232913  678375 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-626179/.minikube/bin
	I1006 14:40:40.233416  678375 out.go:368] Setting JSON to false
	I1006 14:40:40.234527  678375 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":19376,"bootTime":1759742264,"procs":193,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1006 14:40:40.234623  678375 start.go:140] virtualization: kvm guest
	I1006 14:40:40.236341  678375 out.go:179] * [functional-135520] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1006 14:40:40.237443  678375 out.go:179]   - MINIKUBE_LOCATION=21701
	I1006 14:40:40.237480  678375 notify.go:220] Checking for updates...
	I1006 14:40:40.239720  678375 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1006 14:40:40.240829  678375 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21701-626179/kubeconfig
	I1006 14:40:40.241859  678375 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21701-626179/.minikube
	I1006 14:40:40.242876  678375 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1006 14:40:40.243805  678375 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1006 14:40:40.245219  678375 config.go:182] Loaded profile config "functional-135520": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 14:40:40.245691  678375 driver.go:421] Setting default libvirt URI to qemu:///system
	I1006 14:40:40.271708  678375 docker.go:123] docker version: linux-28.5.0:Docker Engine - Community
	I1006 14:40:40.271845  678375 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 14:40:40.332594  678375 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-06 14:40:40.321774938 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1006 14:40:40.332758  678375 docker.go:318] overlay module found
	I1006 14:40:40.333962  678375 out.go:179] * Using the docker driver based on existing profile
	I1006 14:40:40.335324  678375 start.go:304] selected driver: docker
	I1006 14:40:40.335338  678375 start.go:924] validating driver "docker" against &{Name:functional-135520 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-135520 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 14:40:40.335418  678375 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1006 14:40:40.335503  678375 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 14:40:40.404152  678375 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-06 14:40:40.39324905 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1006 14:40:40.405093  678375 cni.go:84] Creating CNI manager for ""
	I1006 14:40:40.405186  678375 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1006 14:40:40.405273  678375 start.go:348] cluster config:
	{Name:functional-135520 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-135520 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQ
emuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 14:40:40.407149  678375 out.go:179] * dry-run validation complete!
	
	
	==> CRI-O <==
	Oct 06 14:40:33 functional-135520 crio[5849]: time="2025-10-06T14:40:33.798335064Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-135520" id=c07c47b5-f123-4df6-aac0-718c9481559f name=/runtime.v1.ImageService/ImageStatus
	Oct 06 14:40:33 functional-135520 crio[5849]: time="2025-10-06T14:40:33.798458706Z" level=info msg="Image localhost/kicbase/echo-server:functional-135520 not found" id=c07c47b5-f123-4df6-aac0-718c9481559f name=/runtime.v1.ImageService/ImageStatus
	Oct 06 14:40:33 functional-135520 crio[5849]: time="2025-10-06T14:40:33.798490196Z" level=info msg="Neither image nor artfiact localhost/kicbase/echo-server:functional-135520 found" id=c07c47b5-f123-4df6-aac0-718c9481559f name=/runtime.v1.ImageService/ImageStatus
	Oct 06 14:40:33 functional-135520 crio[5849]: time="2025-10-06T14:40:33.980963669Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=17b6706e-b500-4524-871f-23df38e70571 name=/runtime.v1.ImageService/ImageStatus
	Oct 06 14:40:33 functional-135520 crio[5849]: time="2025-10-06T14:40:33.981925826Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=94f4b8be-c003-4976-9cb9-8a805158b29d name=/runtime.v1.ImageService/ImageStatus
	Oct 06 14:40:33 functional-135520 crio[5849]: time="2025-10-06T14:40:33.982820585Z" level=info msg="Creating container: kube-system/kube-scheduler-functional-135520/kube-scheduler" id=af53cacb-5aef-4f09-b7c7-e182743a4512 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:40:33 functional-135520 crio[5849]: time="2025-10-06T14:40:33.983106395Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 14:40:33 functional-135520 crio[5849]: time="2025-10-06T14:40:33.987700403Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 14:40:33 functional-135520 crio[5849]: time="2025-10-06T14:40:33.988175946Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 14:40:34 functional-135520 crio[5849]: time="2025-10-06T14:40:34.003670737Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=af53cacb-5aef-4f09-b7c7-e182743a4512 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:40:34 functional-135520 crio[5849]: time="2025-10-06T14:40:34.005132701Z" level=info msg="createCtr: deleting container ID aa3a2f6476915d7b5d9b1bd05a3095d22efa7de7f25df14d6830c1b4bad20c39 from idIndex" id=af53cacb-5aef-4f09-b7c7-e182743a4512 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:40:34 functional-135520 crio[5849]: time="2025-10-06T14:40:34.005171158Z" level=info msg="createCtr: removing container aa3a2f6476915d7b5d9b1bd05a3095d22efa7de7f25df14d6830c1b4bad20c39" id=af53cacb-5aef-4f09-b7c7-e182743a4512 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:40:34 functional-135520 crio[5849]: time="2025-10-06T14:40:34.005225713Z" level=info msg="createCtr: deleting container aa3a2f6476915d7b5d9b1bd05a3095d22efa7de7f25df14d6830c1b4bad20c39 from storage" id=af53cacb-5aef-4f09-b7c7-e182743a4512 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:40:34 functional-135520 crio[5849]: time="2025-10-06T14:40:34.007324024Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-functional-135520_kube-system_5115bd1eba9594a3f2b99b5d6a4b9d59_0" id=af53cacb-5aef-4f09-b7c7-e182743a4512 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:40:39 functional-135520 crio[5849]: time="2025-10-06T14:40:39.980750641Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=ee4ca7c7-ac83-4870-9ade-fa2df648ae3f name=/runtime.v1.ImageService/ImageStatus
	Oct 06 14:40:39 functional-135520 crio[5849]: time="2025-10-06T14:40:39.981808962Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=8b71f330-b482-48e5-bcb5-dc885b414478 name=/runtime.v1.ImageService/ImageStatus
	Oct 06 14:40:39 functional-135520 crio[5849]: time="2025-10-06T14:40:39.983078192Z" level=info msg="Creating container: kube-system/kube-controller-manager-functional-135520/kube-controller-manager" id=76f1f729-27f5-4452-8cb2-354f0a45cfd8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:40:39 functional-135520 crio[5849]: time="2025-10-06T14:40:39.983452542Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 14:40:39 functional-135520 crio[5849]: time="2025-10-06T14:40:39.989062942Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 14:40:39 functional-135520 crio[5849]: time="2025-10-06T14:40:39.991602485Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 14:40:40 functional-135520 crio[5849]: time="2025-10-06T14:40:40.010584568Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=76f1f729-27f5-4452-8cb2-354f0a45cfd8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:40:40 functional-135520 crio[5849]: time="2025-10-06T14:40:40.01256866Z" level=info msg="createCtr: deleting container ID 1598bcba6b2dd999bfc0d02c0f68684bd1b8f0cb195f1cd27ebf377fd1f66153 from idIndex" id=76f1f729-27f5-4452-8cb2-354f0a45cfd8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:40:40 functional-135520 crio[5849]: time="2025-10-06T14:40:40.012620131Z" level=info msg="createCtr: removing container 1598bcba6b2dd999bfc0d02c0f68684bd1b8f0cb195f1cd27ebf377fd1f66153" id=76f1f729-27f5-4452-8cb2-354f0a45cfd8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:40:40 functional-135520 crio[5849]: time="2025-10-06T14:40:40.012659775Z" level=info msg="createCtr: deleting container 1598bcba6b2dd999bfc0d02c0f68684bd1b8f0cb195f1cd27ebf377fd1f66153 from storage" id=76f1f729-27f5-4452-8cb2-354f0a45cfd8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:40:40 functional-135520 crio[5849]: time="2025-10-06T14:40:40.015113141Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-functional-135520_kube-system_09d686e340c6809af92c3f18dc65ef21_0" id=76f1f729-27f5-4452-8cb2-354f0a45cfd8 name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:40:42.044197   18059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:40:42.044812   18059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:40:42.046393   18059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:40:42.046880   18059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1006 14:40:42.048307   18059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	
	
	==> kernel <==
	 14:40:42 up  5:22,  0 user,  load average: 1.09, 0.28, 0.31
	Linux functional-135520 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 06 14:40:34 functional-135520 kubelet[14966]:         container kube-scheduler start failed in pod kube-scheduler-functional-135520_kube-system(5115bd1eba9594a3f2b99b5d6a4b9d59): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 06 14:40:34 functional-135520 kubelet[14966]:  > logger="UnhandledError"
	Oct 06 14:40:34 functional-135520 kubelet[14966]: E1006 14:40:34.007777   14966 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-functional-135520" podUID="5115bd1eba9594a3f2b99b5d6a4b9d59"
	Oct 06 14:40:36 functional-135520 kubelet[14966]: E1006 14:40:36.021610   14966 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8441/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8441: connect: connection refused" event="&Event{ObjectMeta:{functional-135520.186beda7023a08f5  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-135520,UID:functional-135520,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node functional-135520 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:functional-135520,},FirstTimestamp:2025-10-06 14:36:20.970989813 +0000 UTC m=+1.419813170,LastTimestamp:2025-10-06 14:36:20.970989813 +0000 UTC m=+1.419813170,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-135520,}"
	Oct 06 14:40:36 functional-135520 kubelet[14966]: E1006 14:40:36.228685   14966 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://192.168.49.2:8441/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
	Oct 06 14:40:38 functional-135520 kubelet[14966]: E1006 14:40:38.603588   14966 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-135520?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="7s"
	Oct 06 14:40:38 functional-135520 kubelet[14966]: I1006 14:40:38.766620   14966 kubelet_node_status.go:75] "Attempting to register node" node="functional-135520"
	Oct 06 14:40:38 functional-135520 kubelet[14966]: E1006 14:40:38.766986   14966 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8441/api/v1/nodes\": dial tcp 192.168.49.2:8441: connect: connection refused" node="functional-135520"
	Oct 06 14:40:39 functional-135520 kubelet[14966]: E1006 14:40:39.980242   14966 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-135520\" not found" node="functional-135520"
	Oct 06 14:40:40 functional-135520 kubelet[14966]: E1006 14:40:40.015489   14966 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 06 14:40:40 functional-135520 kubelet[14966]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 06 14:40:40 functional-135520 kubelet[14966]:  > podSandboxID="e06459a5221479b8f8ca8a805df180001ae8c03ad8ebddffca24e6ba8a2614e8"
	Oct 06 14:40:40 functional-135520 kubelet[14966]: E1006 14:40:40.015615   14966 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 06 14:40:40 functional-135520 kubelet[14966]:         container kube-controller-manager start failed in pod kube-controller-manager-functional-135520_kube-system(09d686e340c6809af92c3f18dc65ef21): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 06 14:40:40 functional-135520 kubelet[14966]:  > logger="UnhandledError"
	Oct 06 14:40:40 functional-135520 kubelet[14966]: E1006 14:40:40.015653   14966 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-functional-135520" podUID="09d686e340c6809af92c3f18dc65ef21"
	Oct 06 14:40:40 functional-135520 kubelet[14966]: E1006 14:40:40.994321   14966 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-135520\" not found"
	Oct 06 14:40:41 functional-135520 kubelet[14966]: E1006 14:40:41.979794   14966 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-135520\" not found" node="functional-135520"
	Oct 06 14:40:42 functional-135520 kubelet[14966]: E1006 14:40:42.006755   14966 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 06 14:40:42 functional-135520 kubelet[14966]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 06 14:40:42 functional-135520 kubelet[14966]:  > podSandboxID="0bf6050e948f47f363040ce421949b89bef2d06623cc9fef382c27f04872ce86"
	Oct 06 14:40:42 functional-135520 kubelet[14966]: E1006 14:40:42.006857   14966 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 06 14:40:42 functional-135520 kubelet[14966]:         container kube-apiserver start failed in pod kube-apiserver-functional-135520_kube-system(9c0f460a73b4e4a7087ce2a722c4cad4): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 06 14:40:42 functional-135520 kubelet[14966]:  > logger="UnhandledError"
	Oct 06 14:40:42 functional-135520 kubelet[14966]: E1006 14:40:42.006891   14966 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-functional-135520" podUID="9c0f460a73b4e4a7087ce2a722c4cad4"
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-135520 -n functional-135520
I1006 14:40:42.172964  629719 retry.go:31] will retry after 10.092214914s: Temporary Error: Get "http:": http: no Host in request URL
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-135520 -n functional-135520: exit status 2 (326.430488ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-135520" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/parallel/NodeLabels (1.37s)
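
Triage note: every CreateContainer attempt in the crio and kubelet logs above fails with the same "cannot open sd-bus: No such file or directory", and the docker info captured below reports CgroupDriver:systemd. A plausible, unverified-for-this-run explanation is that crio is using the systemd cgroup manager inside the kic node while systemd's D-Bus socket is not reachable there, so every control-plane container create aborts. A minimal triage sketch, assuming the standard crio config layout:

	# is systemd's private D-Bus socket present inside the node?
	docker exec functional-135520 ls -l /run/systemd/private
	# which cgroup manager is crio configured with?
	docker exec functional-135520 grep -rn cgroup_manager /etc/crio/
	# assumed workaround, not validated against this run: set [crio.runtime]
	# cgroup_manager = "cgroupfs" (and conmon_cgroup = "pod"), then restart the runtime:
	docker exec functional-135520 systemctl restart crio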

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.09s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-135520 image load --daemon kicbase/echo-server:functional-135520 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-135520 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-135520" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.09s)
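
Note: image load --daemon appears to transfer the image over SSH straight into the node's container storage, so the stopped apiserver alone does not explain this failure. A quick sketch to check whether the image reached crio at all (crictl ships in the kicbase node image):

	out/minikube-linux-amd64 -p functional-135520 ssh -- sudo crictl images | grep echo-server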

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.33s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-135520 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-135520 tunnel --alsologtostderr]
functional_test_tunnel_test.go:190: tunnel command failed with unexpected error: exit code 103. stderr: I1006 14:40:30.234639  672768 out.go:360] Setting OutFile to fd 1 ...
I1006 14:40:30.234796  672768 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1006 14:40:30.234803  672768 out.go:374] Setting ErrFile to fd 2...
I1006 14:40:30.234808  672768 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1006 14:40:30.235135  672768 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-626179/.minikube/bin
I1006 14:40:30.235548  672768 mustload.go:65] Loading cluster: functional-135520
I1006 14:40:30.236107  672768 config.go:182] Loaded profile config "functional-135520": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1006 14:40:30.236660  672768 cli_runner.go:164] Run: docker container inspect functional-135520 --format={{.State.Status}}
I1006 14:40:30.260163  672768 host.go:66] Checking if "functional-135520" exists ...
I1006 14:40:30.261114  672768 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1006 14:40:30.350628  672768 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-06 14:40:30.338382173 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I1006 14:40:30.350839  672768 api_server.go:166] Checking apiserver status ...
I1006 14:40:30.350884  672768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1006 14:40:30.350920  672768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-135520
I1006 14:40:30.373618  672768 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32878 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/functional-135520/id_rsa Username:docker}
W1006 14:40:30.491068  672768 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

stderr:
I1006 14:40:30.492961  672768 out.go:179] * The control-plane node functional-135520 apiserver is not running: (state=Stopped)
I1006 14:40:30.494258  672768 out.go:179]   To start a cluster, run: "minikube start -p functional-135520"

stdout: * The control-plane node functional-135520 apiserver is not running: (state=Stopped)
To start a cluster, run: "minikube start -p functional-135520"
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-135520 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-amd64 -p functional-135520 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-amd64 -p functional-135520 tunnel --alsologtostderr] stderr:
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-135520 tunnel --alsologtostderr] ...
helpers_test.go:519: unable to terminate pid 672767: os: process already finished
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-amd64 -p functional-135520 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-amd64 -p functional-135520 tunnel --alsologtostderr] stderr:
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.33s)
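
Note: exit code 103 here is the tunnel's preflight failing, not the tunnel logic itself: mustload runs the same pgrep probe shown in the stderr above, finds no kube-apiserver process, and bails out with "apiserver is not running: (state=Stopped)". The probe can be replayed by hand:

	out/minikube-linux-amd64 -p functional-135520 ssh -- sudo pgrep -xnf 'kube-apiserver.*minikube.*'; echo "exit=$?"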

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.02s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-135520 image load --daemon kicbase/echo-server:functional-135520 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-135520 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-135520" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.02s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0.07s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-135520 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:212: (dbg) Non-zero exit: kubectl --context functional-135520 apply -f testdata/testsvc.yaml: exit status 1 (68.627105ms)

** stderr ** 
	error: error validating "testdata/testsvc.yaml": error validating data: failed to download openapi: Get "https://192.168.49.2:8441/openapi/v2?timeout=32s": dial tcp 192.168.49.2:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false

** /stderr **
functional_test_tunnel_test.go:214: kubectl --context functional-135520 apply -f testdata/testsvc.yaml failed: exit status 1
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0.07s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (92.25s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
I1006 14:40:30.581554  629719 retry.go:31] will retry after 1.938883904s: Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:288: failed to hit nginx at "http://": Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-135520 get svc nginx-svc
functional_test_tunnel_test.go:290: (dbg) Non-zero exit: kubectl --context functional-135520 get svc nginx-svc: exit status 1 (50.021302ms)

** stderr ** 
	E1006 14:42:02.827417  681194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1006 14:42:02.827725  681194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1006 14:42:02.829149  681194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1006 14:42:02.829450  681194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1006 14:42:02.830884  681194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test_tunnel_test.go:292: kubectl --context functional-135520 get svc nginx-svc failed: exit status 1
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (92.25s)
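
Note: the polled URL degenerates to "http:" because the test appears to build it from the LoadBalancer ingress IP of nginx-svc, and that service was never created (every apiserver call above was refused), leaving the host part empty. With a healthy control plane, the IP the test expects would come from a standard jsonpath query such as:

	kubectl --context functional-135520 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'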

TestFunctional/parallel/MountCmd/any-port (2.5s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-135520 /tmp/TestFunctionalparallelMountCmdany-port2160266487/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1759761631098316341" to /tmp/TestFunctionalparallelMountCmdany-port2160266487/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1759761631098316341" to /tmp/TestFunctionalparallelMountCmdany-port2160266487/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1759761631098316341" to /tmp/TestFunctionalparallelMountCmdany-port2160266487/001/test-1759761631098316341
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-135520 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-135520 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (292.9486ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1006 14:40:31.391548  629719 retry.go:31] will retry after 636.300576ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-135520 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-135520 ssh -- ls -la /mount-9p
I1006 14:40:32.521332  629719 retry.go:31] will retry after 3.430377622s: Temporary Error: Get "http:": http: no Host in request URL
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct  6 14:40 created-by-test
-rw-r--r-- 1 docker docker 24 Oct  6 14:40 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct  6 14:40 test-1759761631098316341
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-135520 ssh cat /mount-9p/test-1759761631098316341
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-135520 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:148: (dbg) Non-zero exit: kubectl --context functional-135520 replace --force -f testdata/busybox-mount-test.yaml: exit status 1 (49.644907ms)

** stderr ** 
	E1006 14:40:32.911051  674178 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	error: unable to recognize "testdata/busybox-mount-test.yaml": Get "https://192.168.49.2:8441/api?timeout=32s": dial tcp 192.168.49.2:8441: connect: connection refused

** /stderr **
functional_test_mount_test.go:150: failed to 'kubectl replace' for busybox-mount-test. args "kubectl --context functional-135520 replace --force -f testdata/busybox-mount-test.yaml" : exit status 1
functional_test_mount_test.go:80: "TestFunctional/parallel/MountCmd/any-port" failed, getting debug info...
functional_test_mount_test.go:81: (dbg) Run:  out/minikube-linux-amd64 -p functional-135520 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates"
functional_test_mount_test.go:81: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-135520 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates": exit status 1 (289.771687ms)

-- stdout --
	192.168.49.1 on /mount-9p type 9p (rw,relatime,dfltuid=1000,dfltgid=997,access=any,msize=262144,trans=tcp,noextend,port=37537)
	total 2
	-rw-r--r-- 1 docker docker 24 Oct  6 14:40 created-by-test
	-rw-r--r-- 1 docker docker 24 Oct  6 14:40 created-by-test-removed-by-pod
	-rw-r--r-- 1 docker docker 24 Oct  6 14:40 test-1759761631098316341
	cat: /mount-9p/pod-dates: No such file or directory

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:83: debugging command "out/minikube-linux-amd64 -p functional-135520 ssh \"mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates\"" failed : exit status 1
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-135520 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-135520 /tmp/TestFunctionalparallelMountCmdany-port2160266487/001:/mount-9p --alsologtostderr -v=1] ...
functional_test_mount_test.go:94: (dbg) [out/minikube-linux-amd64 mount -p functional-135520 /tmp/TestFunctionalparallelMountCmdany-port2160266487/001:/mount-9p --alsologtostderr -v=1] stdout:
* Mounting host path /tmp/TestFunctionalparallelMountCmdany-port2160266487/001 into VM as /mount-9p ...
- Mount type:   9p
- User ID:      docker
- Group ID:     docker
- Version:      9p2000.L
- Message Size: 262144
- Options:      map[]
- Bind Address: 192.168.49.1:37537
* Userspace file server: 
ufs starting
* Successfully mounted /tmp/TestFunctionalparallelMountCmdany-port2160266487/001 to /mount-9p

* NOTE: This process must stay alive for the mount to be accessible ...
* Unmounting /mount-9p ...

functional_test_mount_test.go:94: (dbg) [out/minikube-linux-amd64 mount -p functional-135520 /tmp/TestFunctionalparallelMountCmdany-port2160266487/001:/mount-9p --alsologtostderr -v=1] stderr:
I1006 14:40:31.150072  673520 out.go:360] Setting OutFile to fd 1 ...
I1006 14:40:31.150237  673520 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1006 14:40:31.150250  673520 out.go:374] Setting ErrFile to fd 2...
I1006 14:40:31.150257  673520 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1006 14:40:31.150491  673520 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-626179/.minikube/bin
I1006 14:40:31.150787  673520 mustload.go:65] Loading cluster: functional-135520
I1006 14:40:31.151106  673520 config.go:182] Loaded profile config "functional-135520": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1006 14:40:31.151646  673520 cli_runner.go:164] Run: docker container inspect functional-135520 --format={{.State.Status}}
I1006 14:40:31.171194  673520 host.go:66] Checking if "functional-135520" exists ...
I1006 14:40:31.171549  673520 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1006 14:40:31.244950  673520 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:false NGoroutines:56 SystemTime:2025-10-06 14:40:31.234299316 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I1006 14:40:31.245100  673520 cli_runner.go:164] Run: docker network inspect functional-135520 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1006 14:40:31.266151  673520 out.go:179] * Mounting host path /tmp/TestFunctionalparallelMountCmdany-port2160266487/001 into VM as /mount-9p ...
I1006 14:40:31.267237  673520 out.go:179]   - Mount type:   9p
I1006 14:40:31.268230  673520 out.go:179]   - User ID:      docker
I1006 14:40:31.269266  673520 out.go:179]   - Group ID:     docker
I1006 14:40:31.270317  673520 out.go:179]   - Version:      9p2000.L
I1006 14:40:31.271225  673520 out.go:179]   - Message Size: 262144
I1006 14:40:31.272140  673520 out.go:179]   - Options:      map[]
I1006 14:40:31.273032  673520 out.go:179]   - Bind Address: 192.168.49.1:37537
I1006 14:40:31.273936  673520 out.go:179] * Userspace file server: 
I1006 14:40:31.274093  673520 ssh_runner.go:195] Run: /bin/bash -c "[ "x$(findmnt -T /mount-9p | grep /mount-9p)" != "x" ] && sudo umount -f -l /mount-9p || echo "
I1006 14:40:31.274171  673520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-135520
I1006 14:40:31.293995  673520 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32878 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/functional-135520/id_rsa Username:docker}
I1006 14:40:31.396916  673520 mount.go:180] unmount for /mount-9p ran successfully
I1006 14:40:31.396946  673520 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /mount-9p"
I1006 14:40:31.405920  673520 ssh_runner.go:195] Run: /bin/bash -c "sudo mount -t 9p -o dfltgid=$(grep ^docker: /etc/group | cut -d: -f3),dfltuid=$(id -u docker),msize=262144,port=37537,trans=tcp,version=9p2000.L 192.168.49.1 /mount-9p"
I1006 14:40:31.417609  673520 main.go:125] stdlog: ufs.go:141 connected
I1006 14:40:31.417797  673520 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:46984 Tversion tag 65535 msize 262144 version '9P2000.L'
I1006 14:40:31.417868  673520 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:46984 Rversion tag 65535 msize 262144 version '9P2000'
I1006 14:40:31.418081  673520 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:46984 Tattach tag 0 fid 0 afid 4294967295 uname 'nobody' nuname 0 aname ''
I1006 14:40:31.418145  673520 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:46984 Rattach tag 0 aqid (20fa385 b9f78779 'd')
I1006 14:40:31.418397  673520 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:46984 Tstat tag 0 fid 0
I1006 14:40:31.418569  673520 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:46984 Rstat tag 0 st ('001' 'jenkins' 'balintp' '' q (20fa385 b9f78779 'd') m d775 at 0 mt 1759761631 l 4096 t 0 d 0 ext )
I1006 14:40:31.419872  673520 lock.go:50] WriteFile acquiring /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/.mount-process: {Name:mkfd8c2801731b661b904acab25c00fdea0f7dbc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1006 14:40:31.420090  673520 mount.go:105] mount successful: ""
I1006 14:40:31.421878  673520 out.go:179] * Successfully mounted /tmp/TestFunctionalparallelMountCmdany-port2160266487/001 to /mount-9p
I1006 14:40:31.423030  673520 out.go:203] 
I1006 14:40:31.423987  673520 out.go:179] * NOTE: This process must stay alive for the mount to be accessible ...
I1006 14:40:32.571157  673520 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:46984 Tstat tag 0 fid 0
I1006 14:40:32.571341  673520 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:46984 Rstat tag 0 st ('001' 'jenkins' 'balintp' '' q (20fa385 b9f78779 'd') m d775 at 0 mt 1759761631 l 4096 t 0 d 0 ext )
I1006 14:40:32.571701  673520 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:46984 Twalk tag 0 fid 0 newfid 1 
I1006 14:40:32.571766  673520 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:46984 Rwalk tag 0 
I1006 14:40:32.571906  673520 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:46984 Topen tag 0 fid 1 mode 0
I1006 14:40:32.571975  673520 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:46984 Ropen tag 0 qid (20fa385 b9f78779 'd') iounit 0
I1006 14:40:32.572091  673520 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:46984 Tstat tag 0 fid 0
I1006 14:40:32.572243  673520 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:46984 Rstat tag 0 st ('001' 'jenkins' 'balintp' '' q (20fa385 b9f78779 'd') m d775 at 0 mt 1759761631 l 4096 t 0 d 0 ext )
I1006 14:40:32.572504  673520 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:46984 Tread tag 0 fid 1 offset 0 count 262120
I1006 14:40:32.572761  673520 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:46984 Rread tag 0 count 258
I1006 14:40:32.572905  673520 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:46984 Tread tag 0 fid 1 offset 258 count 261862
I1006 14:40:32.572938  673520 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:46984 Rread tag 0 count 0
I1006 14:40:32.573070  673520 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:46984 Tread tag 0 fid 1 offset 258 count 262120
I1006 14:40:32.573108  673520 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:46984 Rread tag 0 count 0
I1006 14:40:32.573252  673520 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:46984 Twalk tag 0 fid 0 newfid 2 0:'created-by-test-removed-by-pod' 
I1006 14:40:32.573291  673520 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:46984 Rwalk tag 0 (20fa38a b9f78779 '') 
I1006 14:40:32.573409  673520 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:46984 Tstat tag 0 fid 2
I1006 14:40:32.573516  673520 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:46984 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'balintp' '' q (20fa38a b9f78779 '') m 644 at 0 mt 1759761631 l 24 t 0 d 0 ext )
I1006 14:40:32.573642  673520 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:46984 Tstat tag 0 fid 2
I1006 14:40:32.573777  673520 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:46984 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'balintp' '' q (20fa38a b9f78779 '') m 644 at 0 mt 1759761631 l 24 t 0 d 0 ext )
I1006 14:40:32.573912  673520 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:46984 Tclunk tag 0 fid 2
I1006 14:40:32.573951  673520 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:46984 Rclunk tag 0
I1006 14:40:32.574077  673520 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:46984 Twalk tag 0 fid 0 newfid 2 0:'created-by-test' 
I1006 14:40:32.574126  673520 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:46984 Rwalk tag 0 (20fa389 b9f78779 '') 
I1006 14:40:32.574250  673520 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:46984 Tstat tag 0 fid 2
I1006 14:40:32.574331  673520 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:46984 Rstat tag 0 st ('created-by-test' 'jenkins' 'balintp' '' q (20fa389 b9f78779 '') m 644 at 0 mt 1759761631 l 24 t 0 d 0 ext )
I1006 14:40:32.574440  673520 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:46984 Tstat tag 0 fid 2
I1006 14:40:32.574522  673520 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:46984 Rstat tag 0 st ('created-by-test' 'jenkins' 'balintp' '' q (20fa389 b9f78779 '') m 644 at 0 mt 1759761631 l 24 t 0 d 0 ext )
I1006 14:40:32.574626  673520 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:46984 Tclunk tag 0 fid 2
I1006 14:40:32.574660  673520 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:46984 Rclunk tag 0
I1006 14:40:32.574759  673520 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:46984 Twalk tag 0 fid 0 newfid 2 0:'test-1759761631098316341' 
I1006 14:40:32.574801  673520 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:46984 Rwalk tag 0 (20fa38b b9f78779 '') 
I1006 14:40:32.574892  673520 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:46984 Tstat tag 0 fid 2
I1006 14:40:32.574977  673520 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:46984 Rstat tag 0 st ('test-1759761631098316341' 'jenkins' 'balintp' '' q (20fa38b b9f78779 '') m 644 at 0 mt 1759761631 l 24 t 0 d 0 ext )
I1006 14:40:32.575079  673520 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:46984 Tstat tag 0 fid 2
I1006 14:40:32.575140  673520 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:46984 Rstat tag 0 st ('test-1759761631098316341' 'jenkins' 'balintp' '' q (20fa38b b9f78779 '') m 644 at 0 mt 1759761631 l 24 t 0 d 0 ext )
I1006 14:40:32.575255  673520 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:46984 Tclunk tag 0 fid 2
I1006 14:40:32.575284  673520 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:46984 Rclunk tag 0
I1006 14:40:32.575418  673520 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:46984 Tread tag 0 fid 1 offset 258 count 262120
I1006 14:40:32.575443  673520 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:46984 Rread tag 0 count 0
I1006 14:40:32.575573  673520 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:46984 Tclunk tag 0 fid 1
I1006 14:40:32.575614  673520 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:46984 Rclunk tag 0
I1006 14:40:32.853133  673520 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:46984 Twalk tag 0 fid 0 newfid 1 0:'test-1759761631098316341' 
I1006 14:40:32.853234  673520 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:46984 Rwalk tag 0 (20fa38b b9f78779 '') 
I1006 14:40:32.853402  673520 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:46984 Tstat tag 0 fid 1
I1006 14:40:32.853541  673520 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:46984 Rstat tag 0 st ('test-1759761631098316341' 'jenkins' 'balintp' '' q (20fa38b b9f78779 '') m 644 at 0 mt 1759761631 l 24 t 0 d 0 ext )
I1006 14:40:32.853680  673520 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:46984 Twalk tag 0 fid 1 newfid 2 
I1006 14:40:32.853732  673520 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:46984 Rwalk tag 0 
I1006 14:40:32.853853  673520 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:46984 Topen tag 0 fid 2 mode 0
I1006 14:40:32.853899  673520 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:46984 Ropen tag 0 qid (20fa38b b9f78779 '') iounit 0
I1006 14:40:32.854006  673520 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:46984 Tstat tag 0 fid 1
I1006 14:40:32.854100  673520 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:46984 Rstat tag 0 st ('test-1759761631098316341' 'jenkins' 'balintp' '' q (20fa38b b9f78779 '') m 644 at 0 mt 1759761631 l 24 t 0 d 0 ext )
I1006 14:40:32.854394  673520 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:46984 Tread tag 0 fid 2 offset 0 count 24
I1006 14:40:32.854445  673520 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:46984 Rread tag 0 count 24
I1006 14:40:32.854615  673520 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:46984 Tclunk tag 0 fid 2
I1006 14:40:32.854654  673520 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:46984 Rclunk tag 0
I1006 14:40:32.854798  673520 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:46984 Tclunk tag 0 fid 1
I1006 14:40:32.854832  673520 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:46984 Rclunk tag 0
I1006 14:40:33.193781  673520 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:46984 Tstat tag 0 fid 0
I1006 14:40:33.193933  673520 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:46984 Rstat tag 0 st ('001' 'jenkins' 'balintp' '' q (20fa385 b9f78779 'd') m d775 at 0 mt 1759761631 l 4096 t 0 d 0 ext )
I1006 14:40:33.194340  673520 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:46984 Twalk tag 0 fid 0 newfid 1 
I1006 14:40:33.194401  673520 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:46984 Rwalk tag 0 
I1006 14:40:33.194506  673520 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:46984 Topen tag 0 fid 1 mode 0
I1006 14:40:33.194562  673520 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:46984 Ropen tag 0 qid (20fa385 b9f78779 'd') iounit 0
I1006 14:40:33.194684  673520 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:46984 Tstat tag 0 fid 0
I1006 14:40:33.194784  673520 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:46984 Rstat tag 0 st ('001' 'jenkins' 'balintp' '' q (20fa385 b9f78779 'd') m d775 at 0 mt 1759761631 l 4096 t 0 d 0 ext )
I1006 14:40:33.194987  673520 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:46984 Tread tag 0 fid 1 offset 0 count 262120
I1006 14:40:33.195154  673520 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:46984 Rread tag 0 count 258
I1006 14:40:33.195321  673520 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:46984 Tread tag 0 fid 1 offset 258 count 261862
I1006 14:40:33.195355  673520 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:46984 Rread tag 0 count 0
I1006 14:40:33.195534  673520 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:46984 Tread tag 0 fid 1 offset 258 count 262120
I1006 14:40:33.195578  673520 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:46984 Rread tag 0 count 0
I1006 14:40:33.195684  673520 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:46984 Twalk tag 0 fid 0 newfid 2 0:'created-by-test-removed-by-pod' 
I1006 14:40:33.195715  673520 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:46984 Rwalk tag 0 (20fa38a b9f78779 '') 
I1006 14:40:33.195818  673520 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:46984 Tstat tag 0 fid 2
I1006 14:40:33.195909  673520 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:46984 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'balintp' '' q (20fa38a b9f78779 '') m 644 at 0 mt 1759761631 l 24 t 0 d 0 ext )
I1006 14:40:33.196027  673520 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:46984 Tstat tag 0 fid 2
I1006 14:40:33.196108  673520 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:46984 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'balintp' '' q (20fa38a b9f78779 '') m 644 at 0 mt 1759761631 l 24 t 0 d 0 ext )
I1006 14:40:33.196230  673520 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:46984 Tclunk tag 0 fid 2
I1006 14:40:33.196262  673520 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:46984 Rclunk tag 0
I1006 14:40:33.196390  673520 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:46984 Twalk tag 0 fid 0 newfid 2 0:'created-by-test' 
I1006 14:40:33.196431  673520 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:46984 Rwalk tag 0 (20fa389 b9f78779 '') 
I1006 14:40:33.196528  673520 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:46984 Tstat tag 0 fid 2
I1006 14:40:33.196618  673520 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:46984 Rstat tag 0 st ('created-by-test' 'jenkins' 'balintp' '' q (20fa389 b9f78779 '') m 644 at 0 mt 1759761631 l 24 t 0 d 0 ext )
I1006 14:40:33.196727  673520 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:46984 Tstat tag 0 fid 2
I1006 14:40:33.196832  673520 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:46984 Rstat tag 0 st ('created-by-test' 'jenkins' 'balintp' '' q (20fa389 b9f78779 '') m 644 at 0 mt 1759761631 l 24 t 0 d 0 ext )
I1006 14:40:33.196975  673520 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:46984 Tclunk tag 0 fid 2
I1006 14:40:33.197002  673520 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:46984 Rclunk tag 0
I1006 14:40:33.197130  673520 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:46984 Twalk tag 0 fid 0 newfid 2 0:'test-1759761631098316341' 
I1006 14:40:33.197176  673520 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:46984 Rwalk tag 0 (20fa38b b9f78779 '') 
I1006 14:40:33.197336  673520 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:46984 Tstat tag 0 fid 2
I1006 14:40:33.197434  673520 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:46984 Rstat tag 0 st ('test-1759761631098316341' 'jenkins' 'balintp' '' q (20fa38b b9f78779 '') m 644 at 0 mt 1759761631 l 24 t 0 d 0 ext )
I1006 14:40:33.197590  673520 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:46984 Tstat tag 0 fid 2
I1006 14:40:33.197672  673520 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:46984 Rstat tag 0 st ('test-1759761631098316341' 'jenkins' 'balintp' '' q (20fa38b b9f78779 '') m 644 at 0 mt 1759761631 l 24 t 0 d 0 ext )
I1006 14:40:33.197796  673520 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:46984 Tclunk tag 0 fid 2
I1006 14:40:33.197826  673520 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:46984 Rclunk tag 0
I1006 14:40:33.197975  673520 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:46984 Tread tag 0 fid 1 offset 258 count 262120
I1006 14:40:33.198019  673520 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:46984 Rread tag 0 count 0
I1006 14:40:33.198143  673520 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:46984 Tclunk tag 0 fid 1
I1006 14:40:33.198195  673520 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:46984 Rclunk tag 0
I1006 14:40:33.199239  673520 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:46984 Twalk tag 0 fid 0 newfid 1 0:'pod-dates' 
I1006 14:40:33.199293  673520 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:46984 Rerror tag 0 ename 'file not found' ecode 0
I1006 14:40:33.477252  673520 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:46984 Tclunk tag 0 fid 0
I1006 14:40:33.477309  673520 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:46984 Rclunk tag 0
I1006 14:40:33.477678  673520 main.go:125] stdlog: ufs.go:147 disconnected
I1006 14:40:33.494465  673520 out.go:179] * Unmounting /mount-9p ...
I1006 14:40:33.495505  673520 ssh_runner.go:195] Run: /bin/bash -c "[ "x$(findmnt -T /mount-9p | grep /mount-9p)" != "x" ] && sudo umount -f -l /mount-9p || echo "
I1006 14:40:33.503788  673520 mount.go:180] unmount for /mount-9p ran successfully
I1006 14:40:33.503879  673520 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/.mount-process: {Name:mkfd8c2801731b661b904acab25c00fdea0f7dbc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1006 14:40:33.505081  673520 out.go:203] 
W1006 14:40:33.505995  673520 out.go:285] X Exiting due to MK_INTERRUPTED: Received terminated signal
X Exiting due to MK_INTERRUPTED: Received terminated signal
I1006 14:40:33.506893  673520 out.go:203] 
--- FAIL: TestFunctional/parallel/MountCmd/any-port (2.50s)
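
Note: the 9p mount itself worked end to end here: the ufs trace shows a complete Tversion/Tattach/Tread exchange and the guest listed all three test files. Only the kubectl replace step failed, on the same refused apiserver. For reference, the exact in-guest mount the helper issued (the port is per-run) can be repeated while a host-side "minikube mount" process is alive:

	sudo mount -t 9p -o dfltgid=$(grep ^docker: /etc/group | cut -d: -f3),dfltuid=$(id -u docker),msize=262144,port=37537,trans=tcp,version=9p2000.L 192.168.49.1 /mount-9p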

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.86s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-135520
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-135520 image load --daemon kicbase/echo-server:functional-135520 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-135520 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-135520" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.86s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.32s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-135520 image save kicbase/echo-server:functional-135520 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:401: expected "/home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.32s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.19s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-135520 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:426: loading image into minikube from file: <nil>

** stderr ** 
	I1006 14:40:34.082609  674918 out.go:360] Setting OutFile to fd 1 ...
	I1006 14:40:34.083602  674918 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 14:40:34.083615  674918 out.go:374] Setting ErrFile to fd 2...
	I1006 14:40:34.083619  674918 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 14:40:34.083846  674918 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-626179/.minikube/bin
	I1006 14:40:34.084439  674918 config.go:182] Loaded profile config "functional-135520": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 14:40:34.084536  674918 config.go:182] Loaded profile config "functional-135520": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 14:40:34.084926  674918 cli_runner.go:164] Run: docker container inspect functional-135520 --format={{.State.Status}}
	I1006 14:40:34.103681  674918 ssh_runner.go:195] Run: systemctl --version
	I1006 14:40:34.103757  674918 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-135520
	I1006 14:40:34.122117  674918 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32878 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/functional-135520/id_rsa Username:docker}
	I1006 14:40:34.224213  674918 cache_images.go:290] Loading image from: /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar
	W1006 14:40:34.224274  674918 cache_images.go:254] Failed to load cached images for "functional-135520": loading images: stat /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar: no such file or directory
	I1006 14:40:34.224295  674918 cache_images.go:266] failed pushing to: functional-135520

** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.19s)
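
Note: this is a cascade from ImageSaveToFile above: image save exited without writing the tar, so image load failed at the stat in cache_images.go. A sketch that separates the two failures when re-running by hand:

	tar=/home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar
	out/minikube-linux-amd64 -p functional-135520 image save kicbase/echo-server:functional-135520 "$tar" --alsologtostderr
	[ -s "$tar" ] && out/minikube-linux-amd64 -p functional-135520 image load "$tar" --alsologtostderr || echo "save produced no archive"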

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.36s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-135520
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-135520 image save --daemon kicbase/echo-server:functional-135520 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-135520
functional_test.go:447: (dbg) Non-zero exit: docker image inspect localhost/kicbase/echo-server:functional-135520: exit status 1 (19.878751ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-135520

** /stderr **
functional_test.go:449: expected image to be loaded into Docker, but image was not found: exit status 1

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-135520

** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.36s)
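
Here the harness verifies the round-trip on the host side: after `image save --daemon`, the image should be visible to the host Docker daemon under the localhost/ prefix. A standalone probe of that condition, assuming a plain shell-out to `docker image inspect` (the image reference is copied from the log; the Go wrapper is illustrative, not the harness's code):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Image reference copied from the failing inspect call above.
	ref := "localhost/kicbase/echo-server:functional-135520"
	out, err := exec.Command("docker", "image", "inspect", ref).CombinedOutput()
	if err != nil {
		// "No such image" surfaces as a non-zero exit, as in the report.
		fmt.Printf("image not in daemon: %v\n%s", err, out)
		return
	}
	fmt.Printf("image present:\n%s", out)
}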

TestFunctional/parallel/ServiceCmd/DeployApp (0.05s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-135520 create deployment hello-node --image kicbase/echo-server
functional_test.go:1451: (dbg) Non-zero exit: kubectl --context functional-135520 create deployment hello-node --image kicbase/echo-server: exit status 1 (51.790179ms)

** stderr ** 
	error: failed to create deployment: Post "https://192.168.49.2:8441/apis/apps/v1/namespaces/default/deployments?fieldManager=kubectl-create&fieldValidation=Strict": dial tcp 192.168.49.2:8441: connect: connection refused

** /stderr **
functional_test.go:1453: failed to create hello-node deployment with this command "kubectl --context functional-135520 create deployment hello-node --image kicbase/echo-server": exit status 1.
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (0.05s)
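
The dial error pinpoints the root cause for this and the remaining ServiceCmd failures: nothing is listening on the apiserver endpoint, consistent with the "apiserver is not running (state=Stopped)" banner in the subtests below. A quick TCP reachability probe against the address from the log (illustrative only, not part of the suite):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Endpoint taken from the "connection refused" error above.
	addr := "192.168.49.2:8441"
	conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
	if err != nil {
		fmt.Printf("apiserver unreachable at %s: %v\n", addr, err)
		return
	}
	defer conn.Close()
	fmt.Printf("apiserver reachable at %s\n", addr)
}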

TestFunctional/parallel/ServiceCmd/List (0.28s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-135520 service list
functional_test.go:1469: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-135520 service list: exit status 103 (279.562008ms)

-- stdout --
	* The control-plane node functional-135520 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-135520"

-- /stdout --
functional_test.go:1471: failed to do service list. args "out/minikube-linux-amd64 -p functional-135520 service list" : exit status 103
functional_test.go:1474: expected 'service list' to contain *hello-node* but got -"* The control-plane node functional-135520 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-135520\"\n"-
--- FAIL: TestFunctional/parallel/ServiceCmd/List (0.28s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.27s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-135520 service list -o json
functional_test.go:1499: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-135520 service list -o json: exit status 103 (270.813398ms)

-- stdout --
	* The control-plane node functional-135520 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-135520"

-- /stdout --
functional_test.go:1501: failed to list services with json format. args "out/minikube-linux-amd64 -p functional-135520 service list -o json": exit status 103
--- FAIL: TestFunctional/parallel/ServiceCmd/JSONOutput (0.27s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.26s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-135520 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-135520 service --namespace=default --https --url hello-node: exit status 103 (263.797533ms)

-- stdout --
	* The control-plane node functional-135520 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-135520"

-- /stdout --
functional_test.go:1521: failed to get service url. args "out/minikube-linux-amd64 -p functional-135520 service --namespace=default --https --url hello-node" : exit status 103
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.26s)

TestFunctional/parallel/ServiceCmd/Format (0.29s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-135520 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-135520 service hello-node --url --format={{.IP}}: exit status 103 (291.2962ms)

-- stdout --
	* The control-plane node functional-135520 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-135520"

-- /stdout --
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-amd64 -p functional-135520 service hello-node --url --format={{.IP}}": exit status 103
functional_test.go:1558: "* The control-plane node functional-135520 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-135520\"" is not a valid IP
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.29s)

TestFunctional/parallel/ServiceCmd/URL (0.27s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-135520 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-135520 service hello-node --url: exit status 103 (268.483169ms)

-- stdout --
	* The control-plane node functional-135520 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-135520"

-- /stdout --
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-amd64 -p functional-135520 service hello-node --url": exit status 103
functional_test.go:1575: found endpoint for hello-node: * The control-plane node functional-135520 apiserver is not running: (state=Stopped)
To start a cluster, run: "minikube start -p functional-135520"
functional_test.go:1579: failed to parse "* The control-plane node functional-135520 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-135520\"": parse "* The control-plane node functional-135520 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-135520\"": net/url: invalid control character in URL
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.27s)
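
All five service subtests above fail the same way: minikube detects the stopped apiserver, prints its status banner, and exits 103, so the tests never see a service URL. The URL subtest additionally feeds the banner to Go's URL parser, and the embedded newline is what yields "net/url: invalid control character in URL". A minimal reproduction of that parse failure:

package main

import (
	"fmt"
	"net/url"
)

func main() {
	// Banner text copied from the failure output; the newline is the culprit.
	s := "* The control-plane node functional-135520 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-135520\""
	if _, err := url.Parse(s); err != nil {
		fmt.Println(err) // reports: net/url: invalid control character in URL
	}
}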

TestMultiControlPlane/serial/StartCluster (501.92s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-481559 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1006 14:45:30.521106  629719 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 14:45:30.527530  629719 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 14:45:30.538981  629719 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 14:45:30.560471  629719 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 14:45:30.601910  629719 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 14:45:30.683402  629719 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 14:45:30.844994  629719 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 14:45:31.166763  629719 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 14:45:31.808867  629719 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 14:45:33.090549  629719 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 14:45:35.653368  629719 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 14:45:40.775040  629719 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 14:45:51.017050  629719 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 14:46:11.498955  629719 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 14:46:52.461370  629719 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 14:48:14.383239  629719 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 14:50:30.512170  629719 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 14:50:58.231723  629719 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-481559 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: exit status 80 (8m20.615806626s)

-- stdout --
	* [ha-481559] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21701
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21701-626179/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21701-626179/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "ha-481559" primary control-plane node in "ha-481559" cluster
	* Pulling base image v0.0.48-1759382731-21643 ...
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	
	

-- /stdout --
** stderr ** 
	I1006 14:44:34.230587  682995 out.go:360] Setting OutFile to fd 1 ...
	I1006 14:44:34.230719  682995 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 14:44:34.230728  682995 out.go:374] Setting ErrFile to fd 2...
	I1006 14:44:34.230733  682995 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 14:44:34.230969  682995 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-626179/.minikube/bin
	I1006 14:44:34.231523  682995 out.go:368] Setting JSON to false
	I1006 14:44:34.232538  682995 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":19610,"bootTime":1759742264,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1006 14:44:34.232651  682995 start.go:140] virtualization: kvm guest
	I1006 14:44:34.235278  682995 out.go:179] * [ha-481559] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1006 14:44:34.236668  682995 out.go:179]   - MINIKUBE_LOCATION=21701
	I1006 14:44:34.236708  682995 notify.go:220] Checking for updates...
	I1006 14:44:34.239256  682995 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1006 14:44:34.240475  682995 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21701-626179/kubeconfig
	I1006 14:44:34.242249  682995 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21701-626179/.minikube
	I1006 14:44:34.243577  682995 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1006 14:44:34.244737  682995 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1006 14:44:34.246267  682995 driver.go:421] Setting default libvirt URI to qemu:///system
	I1006 14:44:34.271626  682995 docker.go:123] docker version: linux-28.5.0:Docker Engine - Community
	I1006 14:44:34.271783  682995 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 14:44:34.334697  682995 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-06 14:44:34.323928193 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1006 14:44:34.334819  682995 docker.go:318] overlay module found
	I1006 14:44:34.336770  682995 out.go:179] * Using the docker driver based on user configuration
	I1006 14:44:34.338109  682995 start.go:304] selected driver: docker
	I1006 14:44:34.338130  682995 start.go:924] validating driver "docker" against <nil>
	I1006 14:44:34.338144  682995 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1006 14:44:34.338750  682995 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 14:44:34.398314  682995 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-06 14:44:34.387376197 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1006 14:44:34.398587  682995 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1006 14:44:34.399080  682995 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1006 14:44:34.401095  682995 out.go:179] * Using Docker driver with root privileges
	I1006 14:44:34.402283  682995 cni.go:84] Creating CNI manager for ""
	I1006 14:44:34.402367  682995 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1006 14:44:34.402383  682995 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1006 14:44:34.402476  682995 start.go:348] cluster config:
	{Name:ha-481559 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-481559 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 14:44:34.403829  682995 out.go:179] * Starting "ha-481559" primary control-plane node in "ha-481559" cluster
	I1006 14:44:34.404899  682995 cache.go:123] Beginning downloading kic base image for docker with crio
	I1006 14:44:34.406166  682995 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1006 14:44:34.407227  682995 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1006 14:44:34.407272  682995 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21701-626179/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1006 14:44:34.407284  682995 cache.go:58] Caching tarball of preloaded images
	I1006 14:44:34.407376  682995 preload.go:233] Found /home/jenkins/minikube-integration/21701-626179/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1006 14:44:34.407382  682995 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1006 14:44:34.407387  682995 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1006 14:44:34.407757  682995 profile.go:143] Saving config to /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/config.json ...
	I1006 14:44:34.407793  682995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/config.json: {Name:mkefd90ec0b9eae63c82d60bab053cdf7b5d9b74 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:44:34.429193  682995 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1006 14:44:34.429233  682995 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1006 14:44:34.429254  682995 cache.go:232] Successfully downloaded all kic artifacts
	I1006 14:44:34.429296  682995 start.go:360] acquireMachinesLock for ha-481559: {Name:mk240cd185ab39e9e4d3fa7c476aea5736cb5b11 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1006 14:44:34.429397  682995 start.go:364] duration metric: took 84.055µs to acquireMachinesLock for "ha-481559"
	I1006 14:44:34.429421  682995 start.go:93] Provisioning new machine with config: &{Name:ha-481559 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-481559 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1006 14:44:34.429503  682995 start.go:125] createHost starting for "" (driver="docker")
	I1006 14:44:34.431456  682995 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1006 14:44:34.431692  682995 start.go:159] libmachine.API.Create for "ha-481559" (driver="docker")
	I1006 14:44:34.431725  682995 client.go:168] LocalClient.Create starting
	I1006 14:44:34.431791  682995 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem
	I1006 14:44:34.431825  682995 main.go:141] libmachine: Decoding PEM data...
	I1006 14:44:34.431843  682995 main.go:141] libmachine: Parsing certificate...
	I1006 14:44:34.431939  682995 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21701-626179/.minikube/certs/cert.pem
	I1006 14:44:34.431977  682995 main.go:141] libmachine: Decoding PEM data...
	I1006 14:44:34.431994  682995 main.go:141] libmachine: Parsing certificate...
	I1006 14:44:34.432416  682995 cli_runner.go:164] Run: docker network inspect ha-481559 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1006 14:44:34.449965  682995 cli_runner.go:211] docker network inspect ha-481559 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1006 14:44:34.450053  682995 network_create.go:284] running [docker network inspect ha-481559] to gather additional debugging logs...
	I1006 14:44:34.450071  682995 cli_runner.go:164] Run: docker network inspect ha-481559
	W1006 14:44:34.468682  682995 cli_runner.go:211] docker network inspect ha-481559 returned with exit code 1
	I1006 14:44:34.468713  682995 network_create.go:287] error running [docker network inspect ha-481559]: docker network inspect ha-481559: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-481559 not found
	I1006 14:44:34.468724  682995 network_create.go:289] output of [docker network inspect ha-481559]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-481559 not found
	
	** /stderr **
	I1006 14:44:34.468902  682995 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1006 14:44:34.488223  682995 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ca2540}
	I1006 14:44:34.488276  682995 network_create.go:124] attempt to create docker network ha-481559 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1006 14:44:34.488338  682995 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-481559 ha-481559
	I1006 14:44:34.548630  682995 network_create.go:108] docker network ha-481559 192.168.49.0/24 created
	I1006 14:44:34.548669  682995 kic.go:121] calculated static IP "192.168.49.2" for the "ha-481559" container
	I1006 14:44:34.548729  682995 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1006 14:44:34.566959  682995 cli_runner.go:164] Run: docker volume create ha-481559 --label name.minikube.sigs.k8s.io=ha-481559 --label created_by.minikube.sigs.k8s.io=true
	I1006 14:44:34.586001  682995 oci.go:103] Successfully created a docker volume ha-481559
	I1006 14:44:34.586088  682995 cli_runner.go:164] Run: docker run --rm --name ha-481559-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-481559 --entrypoint /usr/bin/test -v ha-481559:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib
	I1006 14:44:34.994169  682995 oci.go:107] Successfully prepared a docker volume ha-481559
	I1006 14:44:34.994233  682995 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1006 14:44:34.994280  682995 kic.go:194] Starting extracting preloaded images to volume ...
	I1006 14:44:34.994349  682995 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21701-626179/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-481559:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir
	I1006 14:44:39.551248  682995 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21701-626179/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-481559:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir: (4.556814521s)
	I1006 14:44:39.551287  682995 kic.go:203] duration metric: took 4.557022471s to extract preloaded images to volume ...
	W1006 14:44:39.551374  682995 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1006 14:44:39.551406  682995 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1006 14:44:39.551451  682995 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1006 14:44:39.608040  682995 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-481559 --name ha-481559 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-481559 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-481559 --network ha-481559 --ip 192.168.49.2 --volume ha-481559:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d
	I1006 14:44:39.865946  682995 cli_runner.go:164] Run: docker container inspect ha-481559 --format={{.State.Running}}
	I1006 14:44:39.883061  682995 cli_runner.go:164] Run: docker container inspect ha-481559 --format={{.State.Status}}
	I1006 14:44:39.901066  682995 cli_runner.go:164] Run: docker exec ha-481559 stat /var/lib/dpkg/alternatives/iptables
	I1006 14:44:39.951869  682995 oci.go:144] the created container "ha-481559" has a running status.
	I1006 14:44:39.951908  682995 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa...
	I1006 14:44:40.176341  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1006 14:44:40.176392  682995 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1006 14:44:40.205643  682995 cli_runner.go:164] Run: docker container inspect ha-481559 --format={{.State.Status}}
	I1006 14:44:40.227924  682995 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1006 14:44:40.227948  682995 kic_runner.go:114] Args: [docker exec --privileged ha-481559 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1006 14:44:40.277808  682995 cli_runner.go:164] Run: docker container inspect ha-481559 --format={{.State.Status}}
	I1006 14:44:40.297063  682995 machine.go:93] provisionDockerMachine start ...
	I1006 14:44:40.297156  682995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:44:40.315828  682995 main.go:141] libmachine: Using SSH client type: native
	I1006 14:44:40.316109  682995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32883 <nil> <nil>}
	I1006 14:44:40.316124  682995 main.go:141] libmachine: About to run SSH command:
	hostname
	I1006 14:44:40.461735  682995 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-481559
	
	I1006 14:44:40.461771  682995 ubuntu.go:182] provisioning hostname "ha-481559"
	I1006 14:44:40.461843  682995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:44:40.481222  682995 main.go:141] libmachine: Using SSH client type: native
	I1006 14:44:40.481551  682995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32883 <nil> <nil>}
	I1006 14:44:40.481575  682995 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-481559 && echo "ha-481559" | sudo tee /etc/hostname
	I1006 14:44:40.636624  682995 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-481559
	
	I1006 14:44:40.636709  682995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:44:40.655017  682995 main.go:141] libmachine: Using SSH client type: native
	I1006 14:44:40.655283  682995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32883 <nil> <nil>}
	I1006 14:44:40.655302  682995 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-481559' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-481559/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-481559' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1006 14:44:40.801276  682995 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1006 14:44:40.801313  682995 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21701-626179/.minikube CaCertPath:/home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21701-626179/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21701-626179/.minikube}
	I1006 14:44:40.801332  682995 ubuntu.go:190] setting up certificates
	I1006 14:44:40.801344  682995 provision.go:84] configureAuth start
	I1006 14:44:40.801398  682995 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-481559
	I1006 14:44:40.819000  682995 provision.go:143] copyHostCerts
	I1006 14:44:40.819052  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21701-626179/.minikube/ca.pem
	I1006 14:44:40.819089  682995 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-626179/.minikube/ca.pem, removing ...
	I1006 14:44:40.819099  682995 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-626179/.minikube/ca.pem
	I1006 14:44:40.819169  682995 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21701-626179/.minikube/ca.pem (1082 bytes)
	I1006 14:44:40.819281  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21701-626179/.minikube/cert.pem
	I1006 14:44:40.819304  682995 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-626179/.minikube/cert.pem, removing ...
	I1006 14:44:40.819309  682995 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-626179/.minikube/cert.pem
	I1006 14:44:40.819338  682995 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21701-626179/.minikube/cert.pem (1123 bytes)
	I1006 14:44:40.819400  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21701-626179/.minikube/key.pem
	I1006 14:44:40.819416  682995 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-626179/.minikube/key.pem, removing ...
	I1006 14:44:40.819428  682995 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-626179/.minikube/key.pem
	I1006 14:44:40.819460  682995 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21701-626179/.minikube/key.pem (1679 bytes)
	I1006 14:44:40.819525  682995 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21701-626179/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca-key.pem org=jenkins.ha-481559 san=[127.0.0.1 192.168.49.2 ha-481559 localhost minikube]
	I1006 14:44:40.896257  682995 provision.go:177] copyRemoteCerts
	I1006 14:44:40.896328  682995 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1006 14:44:40.896370  682995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:44:40.914092  682995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32883 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa Username:docker}
	I1006 14:44:41.016898  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1006 14:44:41.016969  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1006 14:44:41.037131  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1006 14:44:41.037215  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1006 14:44:41.055180  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1006 14:44:41.055258  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1006 14:44:41.073045  682995 provision.go:87] duration metric: took 271.684433ms to configureAuth
	I1006 14:44:41.073074  682995 ubuntu.go:206] setting minikube options for container-runtime
	I1006 14:44:41.073312  682995 config.go:182] Loaded profile config "ha-481559": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 14:44:41.073456  682995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:44:41.092548  682995 main.go:141] libmachine: Using SSH client type: native
	I1006 14:44:41.092838  682995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32883 <nil> <nil>}
	I1006 14:44:41.092869  682995 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1006 14:44:41.356221  682995 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1006 14:44:41.356247  682995 machine.go:96] duration metric: took 1.059160507s to provisionDockerMachine
	I1006 14:44:41.356259  682995 client.go:171] duration metric: took 6.924524382s to LocalClient.Create
	I1006 14:44:41.356282  682995 start.go:167] duration metric: took 6.924591304s to libmachine.API.Create "ha-481559"
	I1006 14:44:41.356295  682995 start.go:293] postStartSetup for "ha-481559" (driver="docker")
	I1006 14:44:41.356322  682995 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1006 14:44:41.356396  682995 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1006 14:44:41.356453  682995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:44:41.374424  682995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32883 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa Username:docker}
	I1006 14:44:41.479545  682995 ssh_runner.go:195] Run: cat /etc/os-release
	I1006 14:44:41.483318  682995 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1006 14:44:41.483345  682995 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1006 14:44:41.483356  682995 filesync.go:126] Scanning /home/jenkins/minikube-integration/21701-626179/.minikube/addons for local assets ...
	I1006 14:44:41.483402  682995 filesync.go:126] Scanning /home/jenkins/minikube-integration/21701-626179/.minikube/files for local assets ...
	I1006 14:44:41.483499  682995 filesync.go:149] local asset: /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem -> 6297192.pem in /etc/ssl/certs
	I1006 14:44:41.483510  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem -> /etc/ssl/certs/6297192.pem
	I1006 14:44:41.483603  682995 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1006 14:44:41.491409  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem --> /etc/ssl/certs/6297192.pem (1708 bytes)
	I1006 14:44:41.511609  682995 start.go:296] duration metric: took 155.29938ms for postStartSetup
	I1006 14:44:41.511914  682995 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-481559
	I1006 14:44:41.529867  682995 profile.go:143] Saving config to /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/config.json ...
	I1006 14:44:41.530158  682995 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1006 14:44:41.530223  682995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:44:41.547995  682995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32883 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa Username:docker}
	I1006 14:44:41.647810  682995 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1006 14:44:41.652637  682995 start.go:128] duration metric: took 7.223117194s to createHost
	I1006 14:44:41.652662  682995 start.go:83] releasing machines lock for "ha-481559", held for 7.223254897s
	I1006 14:44:41.652730  682995 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-481559
	I1006 14:44:41.670486  682995 ssh_runner.go:195] Run: cat /version.json
	I1006 14:44:41.670511  682995 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1006 14:44:41.670555  682995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:44:41.670581  682995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:44:41.689278  682995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32883 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa Username:docker}
	I1006 14:44:41.689801  682995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32883 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa Username:docker}
	I1006 14:44:41.845142  682995 ssh_runner.go:195] Run: systemctl --version
	I1006 14:44:41.852333  682995 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1006 14:44:41.886799  682995 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1006 14:44:41.891575  682995 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1006 14:44:41.891645  682995 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1006 14:44:41.918020  682995 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1006 14:44:41.918049  682995 start.go:495] detecting cgroup driver to use...
	I1006 14:44:41.918088  682995 detect.go:190] detected "systemd" cgroup driver on host os
	I1006 14:44:41.918148  682995 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1006 14:44:41.934827  682995 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1006 14:44:41.946573  682995 docker.go:218] disabling cri-docker service (if available) ...
	I1006 14:44:41.946626  682995 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1006 14:44:41.961811  682995 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1006 14:44:41.978333  682995 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1006 14:44:42.056893  682995 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1006 14:44:42.140645  682995 docker.go:234] disabling docker service ...
	I1006 14:44:42.140713  682995 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1006 14:44:42.159372  682995 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1006 14:44:42.171857  682995 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1006 14:44:42.255908  682995 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1006 14:44:42.340081  682995 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1006 14:44:42.352916  682995 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1006 14:44:42.367142  682995 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1006 14:44:42.367215  682995 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:44:42.377866  682995 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1006 14:44:42.377939  682995 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:44:42.387157  682995 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:44:42.395944  682995 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:44:42.404768  682995 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1006 14:44:42.412712  682995 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:44:42.420910  682995 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:44:42.434108  682995 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:44:42.442895  682995 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1006 14:44:42.450289  682995 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1006 14:44:42.457667  682995 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 14:44:42.535385  682995 ssh_runner.go:195] Run: sudo systemctl restart crio
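	The sed passes above pin the pause image, force the systemd cgroup manager (matching the "systemd" driver detected on the host), move conmon into the pod cgroup, and open unprivileged ports via default_sysctls before restarting CRI-O. A sketch for checking the result; the expected values are reconstructed from the commands, not captured from this run:
	  sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	  # expected:
	  #   pause_image = "registry.k8s.io/pause:3.10.1"
	  #   cgroup_manager = "systemd"
	  #   conmon_cgroup = "pod"
	  #   "net.ipv4.ip_unprivileged_port_start=0",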
	I1006 14:44:42.643348  682995 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1006 14:44:42.643424  682995 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1006 14:44:42.647404  682995 start.go:563] Will wait 60s for crictl version
	I1006 14:44:42.647467  682995 ssh_runner.go:195] Run: which crictl
	I1006 14:44:42.651000  682995 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1006 14:44:42.675962  682995 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1006 14:44:42.676044  682995 ssh_runner.go:195] Run: crio --version
	I1006 14:44:42.705541  682995 ssh_runner.go:195] Run: crio --version
	I1006 14:44:42.736773  682995 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1006 14:44:42.738090  682995 cli_runner.go:164] Run: docker network inspect ha-481559 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1006 14:44:42.754892  682995 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1006 14:44:42.759274  682995 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
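	The hosts-file edit above is a replace-then-append: drop any stale line for the name, append the fresh mapping, and copy the temp file back under sudo, which keeps repeated starts idempotent. The same pattern standalone (NAME and IP are placeholders; the control-plane.minikube.internal entry later in this log uses the identical shape):
	  NAME=host.minikube.internal
	  IP=192.168.49.1
	  { grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/h.$$
	  sudo cp /tmp/h.$$ /etc/hosts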
	I1006 14:44:42.770415  682995 kubeadm.go:883] updating cluster {Name:ha-481559 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-481559 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1006 14:44:42.770534  682995 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1006 14:44:42.770581  682995 ssh_runner.go:195] Run: sudo crictl images --output json
	I1006 14:44:42.805187  682995 crio.go:514] all images are preloaded for cri-o runtime.
	I1006 14:44:42.805221  682995 crio.go:433] Images already preloaded, skipping extraction
	I1006 14:44:42.805274  682995 ssh_runner.go:195] Run: sudo crictl images --output json
	I1006 14:44:42.831096  682995 crio.go:514] all images are preloaded for cri-o runtime.
	I1006 14:44:42.831123  682995 cache_images.go:85] Images are preloaded, skipping loading
	I1006 14:44:42.831132  682995 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1006 14:44:42.831244  682995 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-481559 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-481559 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1006 14:44:42.831321  682995 ssh_runner.go:195] Run: crio config
	I1006 14:44:42.877768  682995 cni.go:84] Creating CNI manager for ""
	I1006 14:44:42.877790  682995 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1006 14:44:42.877819  682995 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1006 14:44:42.877840  682995 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-481559 NodeName:ha-481559 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1006 14:44:42.877966  682995 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-481559"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
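	The rendered config above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) is what later gets copied to /var/tmp/minikube/kubeadm.yaml. It can be sanity-checked with the same kubeadm binary before init; a sketch, not part of this run, assuming the config validate subcommand present in recent kubeadm releases including v1.34:
	  sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml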
	
	I1006 14:44:42.877993  682995 kube-vip.go:115] generating kube-vip config ...
	I1006 14:44:42.878035  682995 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1006 14:44:42.890886  682995 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
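	kube-vip therefore runs with an ARP-advertised VIP only; IPVS-based control-plane load-balancing needs the ip_vs modules, and with the docker driver the node shares the host kernel, so they must be loadable on the host. A sketch for checking that, not part of this run:
	  # load the core ipvs modules on the host, then re-check
	  sudo modprobe -a ip_vs ip_vs_rr
	  lsmod | grep ip_vs || echo "ipvs unavailable: kube-vip falls back to ARP-only VIP"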
	I1006 14:44:42.890995  682995 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
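	Once this static pod is up, the VIP 192.168.49.254 from the manifest above should be attached to eth0 inside the node and the HA endpoint should answer. A verification sketch using names from this run (assumes /livez is readable anonymously, which default RBAC permits):
	  minikube -p ha-481559 ssh "ip addr show eth0" | grep 192.168.49.254
	  curl -k https://192.168.49.254:8443/livez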
	I1006 14:44:42.891046  682995 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1006 14:44:42.899063  682995 binaries.go:44] Found k8s binaries, skipping transfer
	I1006 14:44:42.899132  682995 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1006 14:44:42.906926  682995 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1006 14:44:42.919358  682995 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1006 14:44:42.934141  682995 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1006 14:44:42.945961  682995 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I1006 14:44:42.959489  682995 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1006 14:44:42.962953  682995 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1006 14:44:42.972760  682995 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 14:44:43.053996  682995 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1006 14:44:43.077665  682995 certs.go:69] Setting up /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559 for IP: 192.168.49.2
	I1006 14:44:43.077692  682995 certs.go:195] generating shared ca certs ...
	I1006 14:44:43.077714  682995 certs.go:227] acquiring lock for ca certs: {Name:mka0cc25cb6a953e937aa825fc55167759271aaf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:44:43.077856  682995 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21701-626179/.minikube/ca.key
	I1006 14:44:43.077899  682995 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21701-626179/.minikube/proxy-client-ca.key
	I1006 14:44:43.077909  682995 certs.go:257] generating profile certs ...
	I1006 14:44:43.077963  682995 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/client.key
	I1006 14:44:43.077983  682995 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/client.crt with IP's: []
	I1006 14:44:43.259387  682995 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/client.crt ...
	I1006 14:44:43.259418  682995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/client.crt: {Name:mk058803c7a7f0f2aa3fb547a3aafbba9518c3f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:44:43.259607  682995 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/client.key ...
	I1006 14:44:43.259619  682995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/client.key: {Name:mk0ae3492597f7c1edf0d7262770452fa244a40b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:44:43.265151  682995 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.key.6031b710
	I1006 14:44:43.265175  682995 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.crt.6031b710 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I1006 14:44:43.807062  682995 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.crt.6031b710 ...
	I1006 14:44:43.807095  682995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.crt.6031b710: {Name:mk30dd14f07a4b732bb60853cc2fd5f84f73e2f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:44:43.807283  682995 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.key.6031b710 ...
	I1006 14:44:43.807298  682995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.key.6031b710: {Name:mkf3f5fbdf7957143c03cb611320a2e02acb94c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:44:43.807374  682995 certs.go:382] copying /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.crt.6031b710 -> /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.crt
	I1006 14:44:43.807489  682995 certs.go:386] copying /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.key.6031b710 -> /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.key
	I1006 14:44:43.807558  682995 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.key
	I1006 14:44:43.807574  682995 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.crt with IP's: []
	I1006 14:44:43.994115  682995 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.crt ...
	I1006 14:44:43.994149  682995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.crt: {Name:mk715c6902e25626016d7eb8fdb7b52f0fdce895 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:44:43.994338  682995 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.key ...
	I1006 14:44:43.994350  682995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.key: {Name:mka438ddf42b96ca34511dda1ce60f08f1d48b59 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:44:43.994429  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1006 14:44:43.994449  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1006 14:44:43.994460  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1006 14:44:43.994470  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1006 14:44:43.994480  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1006 14:44:43.994490  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1006 14:44:43.994510  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1006 14:44:43.994522  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1006 14:44:43.994570  682995 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/629719.pem (1338 bytes)
	W1006 14:44:43.994617  682995 certs.go:480] ignoring /home/jenkins/minikube-integration/21701-626179/.minikube/certs/629719_empty.pem, impossibly tiny 0 bytes
	I1006 14:44:43.994630  682995 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca-key.pem (1675 bytes)
	I1006 14:44:43.994653  682995 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem (1082 bytes)
	I1006 14:44:43.994674  682995 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/cert.pem (1123 bytes)
	I1006 14:44:43.994701  682995 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/key.pem (1679 bytes)
	I1006 14:44:43.994739  682995 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem (1708 bytes)
	I1006 14:44:43.994772  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem -> /usr/share/ca-certificates/6297192.pem
	I1006 14:44:43.994786  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1006 14:44:43.994798  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/629719.pem -> /usr/share/ca-certificates/629719.pem
	I1006 14:44:43.995423  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1006 14:44:44.014422  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1006 14:44:44.032422  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1006 14:44:44.050727  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1006 14:44:44.068490  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1006 14:44:44.085540  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1006 14:44:44.102941  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1006 14:44:44.121043  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1006 14:44:44.139583  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem --> /usr/share/ca-certificates/6297192.pem (1708 bytes)
	I1006 14:44:44.159654  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1006 14:44:44.176939  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/certs/629719.pem --> /usr/share/ca-certificates/629719.pem (1338 bytes)
	I1006 14:44:44.194332  682995 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1006 14:44:44.207641  682995 ssh_runner.go:195] Run: openssl version
	I1006 14:44:44.214349  682995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6297192.pem && ln -fs /usr/share/ca-certificates/6297192.pem /etc/ssl/certs/6297192.pem"
	I1006 14:44:44.223426  682995 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6297192.pem
	I1006 14:44:44.227339  682995 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  6 14:13 /usr/share/ca-certificates/6297192.pem
	I1006 14:44:44.227401  682995 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6297192.pem
	I1006 14:44:44.261578  682995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6297192.pem /etc/ssl/certs/3ec20f2e.0"
	I1006 14:44:44.270472  682995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1006 14:44:44.279083  682995 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1006 14:44:44.282749  682995 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  6 13:56 /usr/share/ca-certificates/minikubeCA.pem
	I1006 14:44:44.282813  682995 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1006 14:44:44.316484  682995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1006 14:44:44.325228  682995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/629719.pem && ln -fs /usr/share/ca-certificates/629719.pem /etc/ssl/certs/629719.pem"
	I1006 14:44:44.334098  682995 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/629719.pem
	I1006 14:44:44.337988  682995 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  6 14:13 /usr/share/ca-certificates/629719.pem
	I1006 14:44:44.338051  682995 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/629719.pem
	I1006 14:44:44.371914  682995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/629719.pem /etc/ssl/certs/51391683.0"
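	The three hash-and-symlink passes above follow OpenSSL's c_rehash convention: a CA is trusted when /etc/ssl/certs/<subject-hash>.0 points at it. The same check done by hand, using a hash that appears in this log (b5213941 for minikubeCA.pem):
	  CERT=/usr/share/ca-certificates/minikubeCA.pem
	  HASH=$(openssl x509 -hash -noout -in "$CERT")   # b5213941 in this run
	  ls -l "/etc/ssl/certs/$HASH.0"                  # should link back to $CERT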
	I1006 14:44:44.380847  682995 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1006 14:44:44.384643  682995 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1006 14:44:44.384694  682995 kubeadm.go:400] StartCluster: {Name:ha-481559 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-481559 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 14:44:44.384758  682995 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1006 14:44:44.384823  682995 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1006 14:44:44.413083  682995 cri.go:89] found id: ""
	I1006 14:44:44.413145  682995 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1006 14:44:44.421446  682995 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1006 14:44:44.429380  682995 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1006 14:44:44.429431  682995 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1006 14:44:44.437643  682995 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1006 14:44:44.437667  682995 kubeadm.go:157] found existing configuration files:
	
	I1006 14:44:44.437726  682995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1006 14:44:44.445948  682995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1006 14:44:44.446021  682995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1006 14:44:44.453451  682995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1006 14:44:44.460986  682995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1006 14:44:44.461064  682995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1006 14:44:44.468259  682995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1006 14:44:44.475830  682995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1006 14:44:44.475882  682995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1006 14:44:44.483080  682995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1006 14:44:44.490569  682995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1006 14:44:44.490632  682995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1006 14:44:44.498056  682995 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1006 14:44:44.560210  682995 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1006 14:44:44.618315  682995 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
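	Both preflight warnings above are benign here: the kicbase kernel ships without the configs module, and minikube starts kubelet itself rather than relying on systemd enablement. The fix kubeadm suggests for the second warning, should it matter, is simply:
	  sudo systemctl enable kubelet.service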
	I1006 14:48:49.762009  682995 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1006 14:48:49.762136  682995 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1006 14:48:49.765019  682995 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1006 14:48:49.765065  682995 kubeadm.go:318] [preflight] Running pre-flight checks
	I1006 14:48:49.765142  682995 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1006 14:48:49.765192  682995 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1006 14:48:49.765263  682995 kubeadm.go:318] OS: Linux
	I1006 14:48:49.765329  682995 kubeadm.go:318] CGROUPS_CPU: enabled
	I1006 14:48:49.765384  682995 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1006 14:48:49.765424  682995 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1006 14:48:49.765465  682995 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1006 14:48:49.765507  682995 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1006 14:48:49.765557  682995 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1006 14:48:49.765644  682995 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1006 14:48:49.765713  682995 kubeadm.go:318] CGROUPS_IO: enabled
	I1006 14:48:49.765816  682995 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1006 14:48:49.765897  682995 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1006 14:48:49.765974  682995 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1006 14:48:49.766033  682995 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1006 14:48:49.768189  682995 out.go:252]   - Generating certificates and keys ...
	I1006 14:48:49.768304  682995 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1006 14:48:49.768391  682995 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1006 14:48:49.768495  682995 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1006 14:48:49.768546  682995 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1006 14:48:49.768600  682995 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1006 14:48:49.768641  682995 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1006 14:48:49.768684  682995 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1006 14:48:49.768778  682995 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [ha-481559 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1006 14:48:49.768847  682995 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1006 14:48:49.768982  682995 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [ha-481559 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1006 14:48:49.769042  682995 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1006 14:48:49.769108  682995 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1006 14:48:49.769166  682995 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1006 14:48:49.769263  682995 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1006 14:48:49.769339  682995 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1006 14:48:49.769416  682995 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1006 14:48:49.769489  682995 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1006 14:48:49.769549  682995 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1006 14:48:49.769601  682995 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1006 14:48:49.769671  682995 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1006 14:48:49.769753  682995 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1006 14:48:49.771489  682995 out.go:252]   - Booting up control plane ...
	I1006 14:48:49.771577  682995 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1006 14:48:49.771664  682995 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1006 14:48:49.771742  682995 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1006 14:48:49.771858  682995 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1006 14:48:49.771974  682995 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1006 14:48:49.772108  682995 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1006 14:48:49.772220  682995 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1006 14:48:49.772288  682995 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1006 14:48:49.772413  682995 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1006 14:48:49.772556  682995 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1006 14:48:49.772647  682995 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.501252368s
	I1006 14:48:49.772772  682995 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1006 14:48:49.772891  682995 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1006 14:48:49.772971  682995 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1006 14:48:49.773033  682995 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1006 14:48:49.773108  682995 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.001319326s
	I1006 14:48:49.773189  682995 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001358761s
	I1006 14:48:49.773304  682995 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.001281021s
	I1006 14:48:49.773319  682995 kubeadm.go:318] 
	I1006 14:48:49.773407  682995 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1006 14:48:49.773472  682995 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1006 14:48:49.773545  682995 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1006 14:48:49.773657  682995 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1006 14:48:49.773771  682995 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1006 14:48:49.773850  682995 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1006 14:48:49.773891  682995 kubeadm.go:318] 
	W1006 14:48:49.774048  682995 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ha-481559 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ha-481559 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.501252368s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.001319326s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001358761s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001281021s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
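	For the 4m0s health-check timeouts above, kubeadm's own hint is the right first step; a fuller triage sketch inside the node (<CONTAINERID> is a placeholder):
	  # find the crashed control-plane container and read its logs
	  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs <CONTAINERID>
	  # runtime- and kubelet-side context for why the static pods never came up
	  sudo journalctl -u crio --no-pager | tail -n 100
	  sudo journalctl -u kubelet --no-pager | tail -n 100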
	
	I1006 14:48:49.774147  682995 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1006 14:48:52.524900  682995 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.75072398s)
	I1006 14:48:52.524985  682995 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1006 14:48:52.538104  682995 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1006 14:48:52.538173  682995 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1006 14:48:52.546610  682995 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1006 14:48:52.546639  682995 kubeadm.go:157] found existing configuration files:
	
	I1006 14:48:52.546692  682995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1006 14:48:52.555271  682995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1006 14:48:52.555334  682995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1006 14:48:52.564502  682995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1006 14:48:52.572861  682995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1006 14:48:52.572925  682995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1006 14:48:52.580681  682995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1006 14:48:52.588574  682995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1006 14:48:52.588636  682995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1006 14:48:52.596314  682995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1006 14:48:52.604007  682995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1006 14:48:52.604073  682995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1006 14:48:52.611967  682995 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1006 14:48:52.650794  682995 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1006 14:48:52.650844  682995 kubeadm.go:318] [preflight] Running pre-flight checks
	I1006 14:48:52.671446  682995 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1006 14:48:52.671559  682995 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1006 14:48:52.671628  682995 kubeadm.go:318] OS: Linux
	I1006 14:48:52.671718  682995 kubeadm.go:318] CGROUPS_CPU: enabled
	I1006 14:48:52.671766  682995 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1006 14:48:52.671811  682995 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1006 14:48:52.671850  682995 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1006 14:48:52.671890  682995 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1006 14:48:52.671928  682995 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1006 14:48:52.671972  682995 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1006 14:48:52.672010  682995 kubeadm.go:318] CGROUPS_IO: enabled
	I1006 14:48:52.732758  682995 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1006 14:48:52.732876  682995 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1006 14:48:52.732979  682995 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1006 14:48:52.739914  682995 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1006 14:48:52.743428  682995 out.go:252]   - Generating certificates and keys ...
	I1006 14:48:52.743535  682995 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1006 14:48:52.743654  682995 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1006 14:48:52.743727  682995 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1006 14:48:52.743777  682995 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1006 14:48:52.743861  682995 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1006 14:48:52.743911  682995 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1006 14:48:52.743985  682995 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1006 14:48:52.744055  682995 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1006 14:48:52.744143  682995 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1006 14:48:52.744228  682995 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1006 14:48:52.744266  682995 kubeadm.go:318] [certs] Using the existing "sa" key
	I1006 14:48:52.744323  682995 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1006 14:48:53.107297  682995 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1006 14:48:53.300701  682995 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1006 14:48:53.503166  682995 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1006 14:48:53.664024  682995 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1006 14:48:53.725865  682995 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1006 14:48:53.726293  682995 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1006 14:48:53.728797  682995 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1006 14:48:53.730586  682995 out.go:252]   - Booting up control plane ...
	I1006 14:48:53.730720  682995 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1006 14:48:53.730830  682995 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1006 14:48:53.730903  682995 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1006 14:48:53.744534  682995 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1006 14:48:53.744672  682995 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1006 14:48:53.752267  682995 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1006 14:48:53.752422  682995 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1006 14:48:53.752505  682995 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1006 14:48:53.852049  682995 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1006 14:48:53.852226  682995 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1006 14:48:54.353729  682995 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 501.825241ms
	I1006 14:48:54.356542  682995 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1006 14:48:54.356619  682995 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1006 14:48:54.356695  682995 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1006 14:48:54.356819  682995 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1006 14:52:54.358331  682995 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.001082251s
	I1006 14:52:54.358653  682995 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001136686s
	I1006 14:52:54.358853  682995 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.001070627s
	I1006 14:52:54.358881  682995 kubeadm.go:318] 
	I1006 14:52:54.359059  682995 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1006 14:52:54.359298  682995 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1006 14:52:54.359539  682995 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1006 14:52:54.359760  682995 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1006 14:52:54.359952  682995 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1006 14:52:54.360116  682995 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1006 14:52:54.360148  682995 kubeadm.go:318] 
	I1006 14:52:54.363033  682995 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1006 14:52:54.363163  682995 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1006 14:52:54.363696  682995 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1006 14:52:54.363761  682995 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1006 14:52:54.363858  682995 kubeadm.go:402] duration metric: took 8m9.979166519s to StartCluster
	I1006 14:52:54.363946  682995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:52:54.364031  682995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:52:54.392579  682995 cri.go:89] found id: ""
	I1006 14:52:54.392622  682995 logs.go:282] 0 containers: []
	W1006 14:52:54.392631  682995 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:52:54.392638  682995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:52:54.392693  682995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:52:54.420188  682995 cri.go:89] found id: ""
	I1006 14:52:54.420226  682995 logs.go:282] 0 containers: []
	W1006 14:52:54.420237  682995 logs.go:284] No container was found matching "etcd"
	I1006 14:52:54.420245  682995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:52:54.420299  682995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:52:54.445694  682995 cri.go:89] found id: ""
	I1006 14:52:54.445723  682995 logs.go:282] 0 containers: []
	W1006 14:52:54.445733  682995 logs.go:284] No container was found matching "coredns"
	I1006 14:52:54.445740  682995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:52:54.445791  682995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:52:54.471923  682995 cri.go:89] found id: ""
	I1006 14:52:54.471954  682995 logs.go:282] 0 containers: []
	W1006 14:52:54.471962  682995 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:52:54.471971  682995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:52:54.472030  682995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:52:54.498805  682995 cri.go:89] found id: ""
	I1006 14:52:54.498836  682995 logs.go:282] 0 containers: []
	W1006 14:52:54.498848  682995 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:52:54.498857  682995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:52:54.498922  682995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:52:54.524613  682995 cri.go:89] found id: ""
	I1006 14:52:54.524638  682995 logs.go:282] 0 containers: []
	W1006 14:52:54.524646  682995 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:52:54.524652  682995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:52:54.524708  682995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:52:54.551140  682995 cri.go:89] found id: ""
	I1006 14:52:54.551170  682995 logs.go:282] 0 containers: []
	W1006 14:52:54.551181  682995 logs.go:284] No container was found matching "kindnet"
	I1006 14:52:54.551194  682995 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:52:54.551220  682995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:52:54.615573  682995 logs.go:123] Gathering logs for container status ...
	I1006 14:52:54.615607  682995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:52:54.645703  682995 logs.go:123] Gathering logs for kubelet ...
	I1006 14:52:54.645732  682995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:52:54.709506  682995 logs.go:123] Gathering logs for dmesg ...
	I1006 14:52:54.709543  682995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:52:54.722963  682995 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:52:54.722997  682995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:52:54.783016  682995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:52:54.774940    2611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 14:52:54.776283    2611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 14:52:54.777585    2611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 14:52:54.778053    2611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 14:52:54.779590    2611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:52:54.774940    2611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 14:52:54.776283    2611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 14:52:54.777585    2611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 14:52:54.778053    2611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 14:52:54.779590    2611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W1006 14:52:54.783054  682995 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.825241ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.001082251s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001136686s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001070627s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1006 14:52:54.783107  682995 out.go:285] * 
	W1006 14:52:54.783182  682995 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	[stdout/stderr omitted: identical, line for line, to the kubeadm init output quoted in full above]
	
	W1006 14:52:54.783200  682995 out.go:285] * 
	W1006 14:52:54.785658  682995 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1006 14:52:54.789273  682995 out.go:203] 
	W1006 14:52:54.790573  682995 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	[stdout/stderr omitted: identical, line for line, to the kubeadm init output quoted in full above]
	
	W1006 14:52:54.790604  682995 out.go:285] * 
	I1006 14:52:54.791821  682995 out.go:203] 

                                                
                                                
** /stderr **
ha_test.go:103: failed to fresh-start ha (multi-control plane) cluster. args "out/minikube-linux-amd64 -p ha-481559 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio" : exit status 80
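Every crictl query in the log above returned zero containers, so the control-plane pods were most likely never created by CRI-O at all. A minimal manual re-check under the same assumptions as this run (profile and container name ha-481559; docker exec standing in for minikube ssh on the docker driver; curl available in the kicbase image):

	docker exec ha-481559 sudo crictl ps -a                           # all containers, including exited ones
	docker exec ha-481559 curl -ksf https://192.168.49.2:8443/livez   # kube-apiserver check kubeadm timed out on
	docker exec ha-481559 curl -ksf https://127.0.0.1:10259/livez     # kube-scheduler
	docker exec ha-481559 curl -ksf https://127.0.0.1:10257/healthz   # kube-controller-manager

If crictl lists no kube-* containers even in the exited state, the failure sits between the kubelet (which reported healthy after ~502ms) and pod creation in CRI-O, and `journalctl -u kubelet` inside the node is the next place to look.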
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/StartCluster]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/StartCluster]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-481559
helpers_test.go:243: (dbg) docker inspect ha-481559:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "8b017d29b6b188c11217460aa328e959f3cceef4aaac68c0efbe9c3f356f27b0",
	        "Created": "2025-10-06T14:44:39.623616791Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 683567,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-06T14:44:39.660699919Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/8b017d29b6b188c11217460aa328e959f3cceef4aaac68c0efbe9c3f356f27b0/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8b017d29b6b188c11217460aa328e959f3cceef4aaac68c0efbe9c3f356f27b0/hostname",
	        "HostsPath": "/var/lib/docker/containers/8b017d29b6b188c11217460aa328e959f3cceef4aaac68c0efbe9c3f356f27b0/hosts",
	        "LogPath": "/var/lib/docker/containers/8b017d29b6b188c11217460aa328e959f3cceef4aaac68c0efbe9c3f356f27b0/8b017d29b6b188c11217460aa328e959f3cceef4aaac68c0efbe9c3f356f27b0-json.log",
	        "Name": "/ha-481559",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-481559:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-481559",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "8b017d29b6b188c11217460aa328e959f3cceef4aaac68c0efbe9c3f356f27b0",
	                "LowerDir": "/var/lib/docker/overlay2/764c4d13032cea1c981a1513641378a4e309e5bc01b5356c6fecba1c3e0e2311-init/diff:/var/lib/docker/overlay2/498c39ad2e273bbda04a4b230222b9767ea2da097b1fe98436168d26143cd080/diff",
	                "MergedDir": "/var/lib/docker/overlay2/764c4d13032cea1c981a1513641378a4e309e5bc01b5356c6fecba1c3e0e2311/merged",
	                "UpperDir": "/var/lib/docker/overlay2/764c4d13032cea1c981a1513641378a4e309e5bc01b5356c6fecba1c3e0e2311/diff",
	                "WorkDir": "/var/lib/docker/overlay2/764c4d13032cea1c981a1513641378a4e309e5bc01b5356c6fecba1c3e0e2311/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-481559",
	                "Source": "/var/lib/docker/volumes/ha-481559/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-481559",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-481559",
	                "name.minikube.sigs.k8s.io": "ha-481559",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7effae92997970d320561b0b86c210815b18a55d65bd555e1bff50158ed38adc",
	            "SandboxKey": "/var/run/docker/netns/7effae929979",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32883"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32884"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32887"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32885"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32886"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-481559": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "42:f3:45:3f:5b:fc",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "be549c6a1ae4457d4629d9a7f86cde88021333ee0af8bb7a740b008115c43dde",
	                    "EndpointID": "b8540561692606ad815fcdb4502c1e36a16141413d3697f4cf48668502930e4c",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-481559",
	                        "8b017d29b6b1"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
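The inspect dump above shows the container's service ports published on ephemeral localhost ports (for example 22/tcp on 127.0.0.1:32883). As a minimal sketch of reading one of these fields directly, using the same Go template the minikube logs below invoke:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' ha-481559

Against the dump above this would print 32883.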
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-481559 -n ha-481559
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-481559 -n ha-481559: exit status 6 (294.561959ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1006 14:52:55.137839  688208 status.go:458] kubeconfig endpoint: get endpoint: "ha-481559" does not appear in /home/jenkins/minikube-integration/21701-626179/kubeconfig

** /stderr **
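The exit status 6 here reflects a stale kubeconfig rather than an unhealthy host: the stderr shows the "ha-481559" context is missing from the kubeconfig file. Following the warning's own suggestion, a minimal repair sketch for this profile:

	out/minikube-linux-amd64 -p ha-481559 update-context
	kubectl config current-context

The second command is only a sanity check; it should report the restored context if update-context succeeded.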
helpers_test.go:247: status error: exit status 6 (may be ok)
helpers_test.go:252: <<< TestMultiControlPlane/serial/StartCluster FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/StartCluster]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-481559 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/StartCluster logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                      ARGS                                                       │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-135520 ssh findmnt -T /mount1                                                                        │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │ 06 Oct 25 14:40 UTC │
	│ ssh            │ functional-135520 ssh findmnt -T /mount2                                                                        │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │ 06 Oct 25 14:40 UTC │
	│ ssh            │ functional-135520 ssh findmnt -T /mount3                                                                        │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │ 06 Oct 25 14:40 UTC │
	│ mount          │ -p functional-135520 --kill=true                                                                                │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │                     │
	│ service        │ functional-135520 service list                                                                                  │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │                     │
	│ service        │ functional-135520 service list -o json                                                                          │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │                     │
	│ service        │ functional-135520 service --namespace=default --https --url hello-node                                          │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │                     │
	│ service        │ functional-135520 service hello-node --url --format={{.IP}}                                                     │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │                     │
	│ service        │ functional-135520 service hello-node --url                                                                      │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │                     │
	│ start          │ -p functional-135520 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio       │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │                     │
	│ start          │ -p functional-135520 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio       │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │                     │
	│ start          │ -p functional-135520 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                 │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │                     │
	│ dashboard      │ --url --port 36195 -p functional-135520 --alsologtostderr -v=1                                                  │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │                     │
	│ update-context │ functional-135520 update-context --alsologtostderr -v=2                                                         │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │ 06 Oct 25 14:40 UTC │
	│ update-context │ functional-135520 update-context --alsologtostderr -v=2                                                         │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │ 06 Oct 25 14:40 UTC │
	│ update-context │ functional-135520 update-context --alsologtostderr -v=2                                                         │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │ 06 Oct 25 14:40 UTC │
	│ image          │ functional-135520 image ls --format short --alsologtostderr                                                     │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │ 06 Oct 25 14:40 UTC │
	│ image          │ functional-135520 image ls --format json --alsologtostderr                                                      │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │ 06 Oct 25 14:40 UTC │
	│ image          │ functional-135520 image ls --format table --alsologtostderr                                                     │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │ 06 Oct 25 14:40 UTC │
	│ image          │ functional-135520 image ls --format yaml --alsologtostderr                                                      │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │ 06 Oct 25 14:40 UTC │
	│ ssh            │ functional-135520 ssh pgrep buildkitd                                                                           │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │                     │
	│ image          │ functional-135520 image build -t localhost/my-image:functional-135520 testdata/build --alsologtostderr          │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │ 06 Oct 25 14:40 UTC │
	│ image          │ functional-135520 image ls                                                                                      │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │ 06 Oct 25 14:40 UTC │
	│ delete         │ -p functional-135520                                                                                            │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:44 UTC │ 06 Oct 25 14:44 UTC │
	│ start          │ ha-481559 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:44 UTC │                     │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/06 14:44:34
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1006 14:44:34.230587  682995 out.go:360] Setting OutFile to fd 1 ...
	I1006 14:44:34.230719  682995 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 14:44:34.230728  682995 out.go:374] Setting ErrFile to fd 2...
	I1006 14:44:34.230733  682995 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 14:44:34.230969  682995 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-626179/.minikube/bin
	I1006 14:44:34.231523  682995 out.go:368] Setting JSON to false
	I1006 14:44:34.232538  682995 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":19610,"bootTime":1759742264,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1006 14:44:34.232651  682995 start.go:140] virtualization: kvm guest
	I1006 14:44:34.235278  682995 out.go:179] * [ha-481559] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1006 14:44:34.236668  682995 out.go:179]   - MINIKUBE_LOCATION=21701
	I1006 14:44:34.236708  682995 notify.go:220] Checking for updates...
	I1006 14:44:34.239256  682995 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1006 14:44:34.240475  682995 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21701-626179/kubeconfig
	I1006 14:44:34.242249  682995 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21701-626179/.minikube
	I1006 14:44:34.243577  682995 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1006 14:44:34.244737  682995 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1006 14:44:34.246267  682995 driver.go:421] Setting default libvirt URI to qemu:///system
	I1006 14:44:34.271626  682995 docker.go:123] docker version: linux-28.5.0:Docker Engine - Community
	I1006 14:44:34.271783  682995 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 14:44:34.334697  682995 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-06 14:44:34.323928193 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1006 14:44:34.334819  682995 docker.go:318] overlay module found
	I1006 14:44:34.336770  682995 out.go:179] * Using the docker driver based on user configuration
	I1006 14:44:34.338109  682995 start.go:304] selected driver: docker
	I1006 14:44:34.338130  682995 start.go:924] validating driver "docker" against <nil>
	I1006 14:44:34.338144  682995 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1006 14:44:34.338750  682995 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 14:44:34.398314  682995 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-06 14:44:34.387376197 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1006 14:44:34.398587  682995 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1006 14:44:34.399080  682995 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1006 14:44:34.401095  682995 out.go:179] * Using Docker driver with root privileges
	I1006 14:44:34.402283  682995 cni.go:84] Creating CNI manager for ""
	I1006 14:44:34.402367  682995 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1006 14:44:34.402383  682995 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1006 14:44:34.402476  682995 start.go:348] cluster config:
	{Name:ha-481559 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-481559 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 14:44:34.403829  682995 out.go:179] * Starting "ha-481559" primary control-plane node in "ha-481559" cluster
	I1006 14:44:34.404899  682995 cache.go:123] Beginning downloading kic base image for docker with crio
	I1006 14:44:34.406166  682995 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1006 14:44:34.407227  682995 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1006 14:44:34.407272  682995 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21701-626179/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1006 14:44:34.407284  682995 cache.go:58] Caching tarball of preloaded images
	I1006 14:44:34.407376  682995 preload.go:233] Found /home/jenkins/minikube-integration/21701-626179/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1006 14:44:34.407382  682995 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1006 14:44:34.407387  682995 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1006 14:44:34.407757  682995 profile.go:143] Saving config to /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/config.json ...
	I1006 14:44:34.407793  682995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/config.json: {Name:mkefd90ec0b9eae63c82d60bab053cdf7b5d9b74 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:44:34.429193  682995 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1006 14:44:34.429233  682995 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1006 14:44:34.429254  682995 cache.go:232] Successfully downloaded all kic artifacts
	I1006 14:44:34.429296  682995 start.go:360] acquireMachinesLock for ha-481559: {Name:mk240cd185ab39e9e4d3fa7c476aea5736cb5b11 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1006 14:44:34.429397  682995 start.go:364] duration metric: took 84.055µs to acquireMachinesLock for "ha-481559"
	I1006 14:44:34.429421  682995 start.go:93] Provisioning new machine with config: &{Name:ha-481559 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-481559 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1006 14:44:34.429503  682995 start.go:125] createHost starting for "" (driver="docker")
	I1006 14:44:34.431456  682995 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1006 14:44:34.431692  682995 start.go:159] libmachine.API.Create for "ha-481559" (driver="docker")
	I1006 14:44:34.431725  682995 client.go:168] LocalClient.Create starting
	I1006 14:44:34.431791  682995 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem
	I1006 14:44:34.431825  682995 main.go:141] libmachine: Decoding PEM data...
	I1006 14:44:34.431843  682995 main.go:141] libmachine: Parsing certificate...
	I1006 14:44:34.431939  682995 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21701-626179/.minikube/certs/cert.pem
	I1006 14:44:34.431977  682995 main.go:141] libmachine: Decoding PEM data...
	I1006 14:44:34.431994  682995 main.go:141] libmachine: Parsing certificate...
	I1006 14:44:34.432416  682995 cli_runner.go:164] Run: docker network inspect ha-481559 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1006 14:44:34.449965  682995 cli_runner.go:211] docker network inspect ha-481559 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1006 14:44:34.450053  682995 network_create.go:284] running [docker network inspect ha-481559] to gather additional debugging logs...
	I1006 14:44:34.450071  682995 cli_runner.go:164] Run: docker network inspect ha-481559
	W1006 14:44:34.468682  682995 cli_runner.go:211] docker network inspect ha-481559 returned with exit code 1
	I1006 14:44:34.468713  682995 network_create.go:287] error running [docker network inspect ha-481559]: docker network inspect ha-481559: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-481559 not found
	I1006 14:44:34.468724  682995 network_create.go:289] output of [docker network inspect ha-481559]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-481559 not found
	
	** /stderr **
	I1006 14:44:34.468902  682995 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1006 14:44:34.488223  682995 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ca2540}
	I1006 14:44:34.488276  682995 network_create.go:124] attempt to create docker network ha-481559 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1006 14:44:34.488338  682995 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-481559 ha-481559
	I1006 14:44:34.548630  682995 network_create.go:108] docker network ha-481559 192.168.49.0/24 created
	I1006 14:44:34.548669  682995 kic.go:121] calculated static IP "192.168.49.2" for the "ha-481559" container
	I1006 14:44:34.548729  682995 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1006 14:44:34.566959  682995 cli_runner.go:164] Run: docker volume create ha-481559 --label name.minikube.sigs.k8s.io=ha-481559 --label created_by.minikube.sigs.k8s.io=true
	I1006 14:44:34.586001  682995 oci.go:103] Successfully created a docker volume ha-481559
	I1006 14:44:34.586088  682995 cli_runner.go:164] Run: docker run --rm --name ha-481559-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-481559 --entrypoint /usr/bin/test -v ha-481559:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib
	I1006 14:44:34.994169  682995 oci.go:107] Successfully prepared a docker volume ha-481559
	I1006 14:44:34.994233  682995 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1006 14:44:34.994280  682995 kic.go:194] Starting extracting preloaded images to volume ...
	I1006 14:44:34.994349  682995 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21701-626179/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-481559:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir
	I1006 14:44:39.551248  682995 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21701-626179/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-481559:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir: (4.556814521s)
	I1006 14:44:39.551287  682995 kic.go:203] duration metric: took 4.557022471s to extract preloaded images to volume ...
	W1006 14:44:39.551374  682995 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1006 14:44:39.551406  682995 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1006 14:44:39.551451  682995 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1006 14:44:39.608040  682995 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-481559 --name ha-481559 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-481559 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-481559 --network ha-481559 --ip 192.168.49.2 --volume ha-481559:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d
	I1006 14:44:39.865946  682995 cli_runner.go:164] Run: docker container inspect ha-481559 --format={{.State.Running}}
	I1006 14:44:39.883061  682995 cli_runner.go:164] Run: docker container inspect ha-481559 --format={{.State.Status}}
	I1006 14:44:39.901066  682995 cli_runner.go:164] Run: docker exec ha-481559 stat /var/lib/dpkg/alternatives/iptables
	I1006 14:44:39.951869  682995 oci.go:144] the created container "ha-481559" has a running status.
	I1006 14:44:39.951908  682995 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa...
	I1006 14:44:40.176341  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1006 14:44:40.176392  682995 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1006 14:44:40.205643  682995 cli_runner.go:164] Run: docker container inspect ha-481559 --format={{.State.Status}}
	I1006 14:44:40.227924  682995 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1006 14:44:40.227948  682995 kic_runner.go:114] Args: [docker exec --privileged ha-481559 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1006 14:44:40.277808  682995 cli_runner.go:164] Run: docker container inspect ha-481559 --format={{.State.Status}}
	I1006 14:44:40.297063  682995 machine.go:93] provisionDockerMachine start ...
	I1006 14:44:40.297156  682995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:44:40.315828  682995 main.go:141] libmachine: Using SSH client type: native
	I1006 14:44:40.316109  682995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32883 <nil> <nil>}
	I1006 14:44:40.316124  682995 main.go:141] libmachine: About to run SSH command:
	hostname
	I1006 14:44:40.461735  682995 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-481559
	
	I1006 14:44:40.461771  682995 ubuntu.go:182] provisioning hostname "ha-481559"
	I1006 14:44:40.461843  682995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:44:40.481222  682995 main.go:141] libmachine: Using SSH client type: native
	I1006 14:44:40.481551  682995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32883 <nil> <nil>}
	I1006 14:44:40.481575  682995 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-481559 && echo "ha-481559" | sudo tee /etc/hostname
	I1006 14:44:40.636624  682995 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-481559
	
	I1006 14:44:40.636709  682995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:44:40.655017  682995 main.go:141] libmachine: Using SSH client type: native
	I1006 14:44:40.655283  682995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32883 <nil> <nil>}
	I1006 14:44:40.655302  682995 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-481559' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-481559/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-481559' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1006 14:44:40.801276  682995 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1006 14:44:40.801313  682995 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21701-626179/.minikube CaCertPath:/home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21701-626179/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21701-626179/.minikube}
	I1006 14:44:40.801332  682995 ubuntu.go:190] setting up certificates
	I1006 14:44:40.801344  682995 provision.go:84] configureAuth start
	I1006 14:44:40.801398  682995 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-481559
	I1006 14:44:40.819000  682995 provision.go:143] copyHostCerts
	I1006 14:44:40.819052  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21701-626179/.minikube/ca.pem
	I1006 14:44:40.819089  682995 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-626179/.minikube/ca.pem, removing ...
	I1006 14:44:40.819099  682995 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-626179/.minikube/ca.pem
	I1006 14:44:40.819169  682995 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21701-626179/.minikube/ca.pem (1082 bytes)
	I1006 14:44:40.819281  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21701-626179/.minikube/cert.pem
	I1006 14:44:40.819304  682995 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-626179/.minikube/cert.pem, removing ...
	I1006 14:44:40.819309  682995 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-626179/.minikube/cert.pem
	I1006 14:44:40.819338  682995 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21701-626179/.minikube/cert.pem (1123 bytes)
	I1006 14:44:40.819400  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21701-626179/.minikube/key.pem
	I1006 14:44:40.819416  682995 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-626179/.minikube/key.pem, removing ...
	I1006 14:44:40.819428  682995 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-626179/.minikube/key.pem
	I1006 14:44:40.819460  682995 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21701-626179/.minikube/key.pem (1679 bytes)
	I1006 14:44:40.819525  682995 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21701-626179/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca-key.pem org=jenkins.ha-481559 san=[127.0.0.1 192.168.49.2 ha-481559 localhost minikube]
	I1006 14:44:40.896257  682995 provision.go:177] copyRemoteCerts
	I1006 14:44:40.896328  682995 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1006 14:44:40.896370  682995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:44:40.914092  682995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32883 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa Username:docker}
	I1006 14:44:41.016898  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1006 14:44:41.016969  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1006 14:44:41.037131  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1006 14:44:41.037215  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1006 14:44:41.055180  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1006 14:44:41.055258  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1006 14:44:41.073045  682995 provision.go:87] duration metric: took 271.684433ms to configureAuth
	I1006 14:44:41.073074  682995 ubuntu.go:206] setting minikube options for container-runtime
	I1006 14:44:41.073312  682995 config.go:182] Loaded profile config "ha-481559": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 14:44:41.073456  682995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:44:41.092548  682995 main.go:141] libmachine: Using SSH client type: native
	I1006 14:44:41.092838  682995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32883 <nil> <nil>}
	I1006 14:44:41.092869  682995 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1006 14:44:41.356221  682995 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1006 14:44:41.356247  682995 machine.go:96] duration metric: took 1.059160507s to provisionDockerMachine
	I1006 14:44:41.356259  682995 client.go:171] duration metric: took 6.924524382s to LocalClient.Create
	I1006 14:44:41.356282  682995 start.go:167] duration metric: took 6.924591304s to libmachine.API.Create "ha-481559"
	I1006 14:44:41.356295  682995 start.go:293] postStartSetup for "ha-481559" (driver="docker")
	I1006 14:44:41.356322  682995 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1006 14:44:41.356396  682995 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1006 14:44:41.356453  682995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:44:41.374424  682995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32883 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa Username:docker}
	I1006 14:44:41.479545  682995 ssh_runner.go:195] Run: cat /etc/os-release
	I1006 14:44:41.483318  682995 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1006 14:44:41.483345  682995 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1006 14:44:41.483356  682995 filesync.go:126] Scanning /home/jenkins/minikube-integration/21701-626179/.minikube/addons for local assets ...
	I1006 14:44:41.483402  682995 filesync.go:126] Scanning /home/jenkins/minikube-integration/21701-626179/.minikube/files for local assets ...
	I1006 14:44:41.483499  682995 filesync.go:149] local asset: /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem -> 6297192.pem in /etc/ssl/certs
	I1006 14:44:41.483510  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem -> /etc/ssl/certs/6297192.pem
	I1006 14:44:41.483603  682995 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1006 14:44:41.491409  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem --> /etc/ssl/certs/6297192.pem (1708 bytes)
	I1006 14:44:41.511609  682995 start.go:296] duration metric: took 155.29938ms for postStartSetup
	I1006 14:44:41.511914  682995 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-481559
	I1006 14:44:41.529867  682995 profile.go:143] Saving config to /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/config.json ...
	I1006 14:44:41.530158  682995 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1006 14:44:41.530223  682995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:44:41.547995  682995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32883 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa Username:docker}
	I1006 14:44:41.647810  682995 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1006 14:44:41.652637  682995 start.go:128] duration metric: took 7.223117194s to createHost
	I1006 14:44:41.652662  682995 start.go:83] releasing machines lock for "ha-481559", held for 7.223254897s
	I1006 14:44:41.652730  682995 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-481559
	I1006 14:44:41.670486  682995 ssh_runner.go:195] Run: cat /version.json
	I1006 14:44:41.670511  682995 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1006 14:44:41.670555  682995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:44:41.670581  682995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:44:41.689278  682995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32883 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa Username:docker}
	I1006 14:44:41.689801  682995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32883 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa Username:docker}
	I1006 14:44:41.845142  682995 ssh_runner.go:195] Run: systemctl --version
	I1006 14:44:41.852333  682995 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1006 14:44:41.886799  682995 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1006 14:44:41.891575  682995 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1006 14:44:41.891645  682995 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1006 14:44:41.918020  682995 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1006 14:44:41.918049  682995 start.go:495] detecting cgroup driver to use...
	I1006 14:44:41.918088  682995 detect.go:190] detected "systemd" cgroup driver on host os
	I1006 14:44:41.918148  682995 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1006 14:44:41.934827  682995 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1006 14:44:41.946573  682995 docker.go:218] disabling cri-docker service (if available) ...
	I1006 14:44:41.946626  682995 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1006 14:44:41.961811  682995 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1006 14:44:41.978333  682995 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1006 14:44:42.056893  682995 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1006 14:44:42.140645  682995 docker.go:234] disabling docker service ...
	I1006 14:44:42.140713  682995 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1006 14:44:42.159372  682995 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1006 14:44:42.171857  682995 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1006 14:44:42.255908  682995 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1006 14:44:42.340081  682995 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1006 14:44:42.352916  682995 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1006 14:44:42.367142  682995 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1006 14:44:42.367215  682995 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:44:42.377866  682995 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1006 14:44:42.377939  682995 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:44:42.387157  682995 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:44:42.395944  682995 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:44:42.404768  682995 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1006 14:44:42.412712  682995 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:44:42.420910  682995 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:44:42.434108  682995 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:44:42.442895  682995 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1006 14:44:42.450289  682995 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1006 14:44:42.457667  682995 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 14:44:42.535385  682995 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1006 14:44:42.643348  682995 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1006 14:44:42.643424  682995 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1006 14:44:42.647404  682995 start.go:563] Will wait 60s for crictl version
	I1006 14:44:42.647467  682995 ssh_runner.go:195] Run: which crictl
	I1006 14:44:42.651000  682995 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1006 14:44:42.675962  682995 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
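	# Editor's note (hedged sketch): the crictl runtime endpoint was written to /etc/crictl.yaml earlier
	# in this log; the same version probe can be reproduced explicitly against that socket:
	#   sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version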
	I1006 14:44:42.676044  682995 ssh_runner.go:195] Run: crio --version
	I1006 14:44:42.705541  682995 ssh_runner.go:195] Run: crio --version
	I1006 14:44:42.736773  682995 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1006 14:44:42.738090  682995 cli_runner.go:164] Run: docker network inspect ha-481559 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1006 14:44:42.754892  682995 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1006 14:44:42.759274  682995 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1006 14:44:42.770415  682995 kubeadm.go:883] updating cluster {Name:ha-481559 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-481559 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1006 14:44:42.770534  682995 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1006 14:44:42.770581  682995 ssh_runner.go:195] Run: sudo crictl images --output json
	I1006 14:44:42.805187  682995 crio.go:514] all images are preloaded for cri-o runtime.
	I1006 14:44:42.805221  682995 crio.go:433] Images already preloaded, skipping extraction
	I1006 14:44:42.805274  682995 ssh_runner.go:195] Run: sudo crictl images --output json
	I1006 14:44:42.831096  682995 crio.go:514] all images are preloaded for cri-o runtime.
	I1006 14:44:42.831123  682995 cache_images.go:85] Images are preloaded, skipping loading
	I1006 14:44:42.831132  682995 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1006 14:44:42.831244  682995 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-481559 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-481559 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
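	# Editor's note (hedged sketch): to confirm the kubelet drop-in above actually landed on the node,
	# the generated unit can be dumped from inside the container, e.g.:
	#   out/minikube-linux-amd64 -p ha-481559 ssh -- sudo systemctl cat kubelet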
	I1006 14:44:42.831321  682995 ssh_runner.go:195] Run: crio config
	I1006 14:44:42.877768  682995 cni.go:84] Creating CNI manager for ""
	I1006 14:44:42.877790  682995 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1006 14:44:42.877819  682995 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1006 14:44:42.877840  682995 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-481559 NodeName:ha-481559 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1006 14:44:42.877966  682995 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-481559"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
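A generated config like the one above can be exercised without touching node state via kubeadm's dry-run mode, which renders the manifests it would write and stops there (a sketch, assuming the file is already at the path minikube uses):

	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run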
	
	I1006 14:44:42.877993  682995 kube-vip.go:115] generating kube-vip config ...
	I1006 14:44:42.878035  682995 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1006 14:44:42.890886  682995 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
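kube-vip's IPVS-based control-plane load balancing needs the ip_vs kernel module, so with lsmod coming back empty minikube generates the config without it and relies on the ARP-advertised VIP alone. On a host where the module exists it could be loaded and re-checked with (a sketch, not from this run):

	sudo modprobe ip_vs
	lsmod | grep ip_vs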
	I1006 14:44:42.890995  682995 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
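The leader-election settings in this manifest follow the usual Kubernetes ordering leaseDuration (5s) > renewDeadline (3s) > retryPeriod (1s), and the VIP 192.168.49.254 is advertised as a /32 on eth0 by whichever control-plane node currently holds the plndr-cp-lock lease. Whether a node is the current holder shows up directly on the interface (a sketch):

	ip addr show eth0 | grep 192.168.49.254   # present only on the kube-vip leader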
	I1006 14:44:42.891046  682995 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1006 14:44:42.899063  682995 binaries.go:44] Found k8s binaries, skipping transfer
	I1006 14:44:42.899132  682995 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1006 14:44:42.906926  682995 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1006 14:44:42.919358  682995 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1006 14:44:42.934141  682995 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1006 14:44:42.945961  682995 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I1006 14:44:42.959489  682995 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1006 14:44:42.962953  682995 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
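That one-liner is minikube's idempotent hosts-file update: filter out any existing control-plane.minikube.internal entry, append the fresh VIP mapping, and copy the result back as root. cp is used rather than mv because inside a Docker container /etc/hosts is typically a bind mount that can only be rewritten in place, not replaced. The same pattern in generic form (hostname and IP here are illustrative):

	{ grep -v $'\texample.internal$' /etc/hosts; echo $'10.0.0.1\texample.internal'; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts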
	I1006 14:44:42.972760  682995 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 14:44:43.053996  682995 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1006 14:44:43.077665  682995 certs.go:69] Setting up /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559 for IP: 192.168.49.2
	I1006 14:44:43.077692  682995 certs.go:195] generating shared ca certs ...
	I1006 14:44:43.077714  682995 certs.go:227] acquiring lock for ca certs: {Name:mka0cc25cb6a953e937aa825fc55167759271aaf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:44:43.077856  682995 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21701-626179/.minikube/ca.key
	I1006 14:44:43.077899  682995 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21701-626179/.minikube/proxy-client-ca.key
	I1006 14:44:43.077909  682995 certs.go:257] generating profile certs ...
	I1006 14:44:43.077963  682995 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/client.key
	I1006 14:44:43.077983  682995 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/client.crt with IP's: []
	I1006 14:44:43.259387  682995 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/client.crt ...
	I1006 14:44:43.259418  682995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/client.crt: {Name:mk058803c7a7f0f2aa3fb547a3aafbba9518c3f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:44:43.259607  682995 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/client.key ...
	I1006 14:44:43.259619  682995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/client.key: {Name:mk0ae3492597f7c1edf0d7262770452fa244a40b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:44:43.265151  682995 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.key.6031b710
	I1006 14:44:43.265175  682995 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.crt.6031b710 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I1006 14:44:43.807062  682995 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.crt.6031b710 ...
	I1006 14:44:43.807095  682995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.crt.6031b710: {Name:mk30dd14f07a4b732bb60853cc2fd5f84f73e2f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:44:43.807283  682995 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.key.6031b710 ...
	I1006 14:44:43.807298  682995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.key.6031b710: {Name:mkf3f5fbdf7957143c03cb611320a2e02acb94c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:44:43.807374  682995 certs.go:382] copying /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.crt.6031b710 -> /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.crt
	I1006 14:44:43.807489  682995 certs.go:386] copying /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.key.6031b710 -> /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.key
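The SAN list used for this apiserver certificate covers 10.96.0.1 (the kubernetes Service ClusterIP, i.e. the first address of the 10.96.0.0/12 service CIDR), loopback, the node IP 192.168.49.2 and the HA VIP 192.168.49.254. The encoded SANs can be read back from the finished cert with (a sketch):

	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.crt \
	  | grep -A1 'Subject Alternative Name'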
	I1006 14:44:43.807558  682995 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.key
	I1006 14:44:43.807574  682995 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.crt with IP's: []
	I1006 14:44:43.994115  682995 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.crt ...
	I1006 14:44:43.994149  682995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.crt: {Name:mk715c6902e25626016d7eb8fdb7b52f0fdce895 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:44:43.994338  682995 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.key ...
	I1006 14:44:43.994350  682995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.key: {Name:mka438ddf42b96ca34511dda1ce60f08f1d48b59 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:44:43.994429  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1006 14:44:43.994449  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1006 14:44:43.994460  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1006 14:44:43.994470  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1006 14:44:43.994480  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1006 14:44:43.994490  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1006 14:44:43.994510  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1006 14:44:43.994522  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1006 14:44:43.994570  682995 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/629719.pem (1338 bytes)
	W1006 14:44:43.994617  682995 certs.go:480] ignoring /home/jenkins/minikube-integration/21701-626179/.minikube/certs/629719_empty.pem, impossibly tiny 0 bytes
	I1006 14:44:43.994630  682995 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca-key.pem (1675 bytes)
	I1006 14:44:43.994653  682995 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem (1082 bytes)
	I1006 14:44:43.994674  682995 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/cert.pem (1123 bytes)
	I1006 14:44:43.994701  682995 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/key.pem (1679 bytes)
	I1006 14:44:43.994739  682995 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem (1708 bytes)
	I1006 14:44:43.994772  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem -> /usr/share/ca-certificates/6297192.pem
	I1006 14:44:43.994786  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1006 14:44:43.994798  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/629719.pem -> /usr/share/ca-certificates/629719.pem
	I1006 14:44:43.995423  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1006 14:44:44.014422  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1006 14:44:44.032422  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1006 14:44:44.050727  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1006 14:44:44.068490  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1006 14:44:44.085540  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1006 14:44:44.102941  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1006 14:44:44.121043  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1006 14:44:44.139583  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem --> /usr/share/ca-certificates/6297192.pem (1708 bytes)
	I1006 14:44:44.159654  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1006 14:44:44.176939  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/certs/629719.pem --> /usr/share/ca-certificates/629719.pem (1338 bytes)
	I1006 14:44:44.194332  682995 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1006 14:44:44.207641  682995 ssh_runner.go:195] Run: openssl version
	I1006 14:44:44.214349  682995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6297192.pem && ln -fs /usr/share/ca-certificates/6297192.pem /etc/ssl/certs/6297192.pem"
	I1006 14:44:44.223426  682995 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6297192.pem
	I1006 14:44:44.227339  682995 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  6 14:13 /usr/share/ca-certificates/6297192.pem
	I1006 14:44:44.227401  682995 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6297192.pem
	I1006 14:44:44.261578  682995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6297192.pem /etc/ssl/certs/3ec20f2e.0"
	I1006 14:44:44.270472  682995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1006 14:44:44.279083  682995 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1006 14:44:44.282749  682995 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  6 13:56 /usr/share/ca-certificates/minikubeCA.pem
	I1006 14:44:44.282813  682995 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1006 14:44:44.316484  682995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1006 14:44:44.325228  682995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/629719.pem && ln -fs /usr/share/ca-certificates/629719.pem /etc/ssl/certs/629719.pem"
	I1006 14:44:44.334098  682995 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/629719.pem
	I1006 14:44:44.337988  682995 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  6 14:13 /usr/share/ca-certificates/629719.pem
	I1006 14:44:44.338051  682995 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/629719.pem
	I1006 14:44:44.371914  682995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/629719.pem /etc/ssl/certs/51391683.0"
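These test -L / ln -fs steps reproduce what OpenSSL's c_rehash does: the library locates CA certificates through subject-hash symlinks, so each PEM under /etc/ssl/certs needs a link named <hash>.0, where the hash is exactly the value openssl x509 -hash prints (3ec20f2e, b5213941 and 51391683 above). For a single file by hand (illustrative path):

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/example.pem)
	sudo ln -fs /usr/share/ca-certificates/example.pem /etc/ssl/certs/$h.0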
	I1006 14:44:44.380847  682995 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1006 14:44:44.384643  682995 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1006 14:44:44.384694  682995 kubeadm.go:400] StartCluster: {Name:ha-481559 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-481559 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 14:44:44.384758  682995 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1006 14:44:44.384823  682995 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1006 14:44:44.413083  682995 cri.go:89] found id: ""
	I1006 14:44:44.413145  682995 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1006 14:44:44.421446  682995 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1006 14:44:44.429380  682995 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1006 14:44:44.429431  682995 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1006 14:44:44.437643  682995 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1006 14:44:44.437667  682995 kubeadm.go:157] found existing configuration files:
	
	I1006 14:44:44.437726  682995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1006 14:44:44.445948  682995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1006 14:44:44.446021  682995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1006 14:44:44.453451  682995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1006 14:44:44.460986  682995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1006 14:44:44.461064  682995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1006 14:44:44.468259  682995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1006 14:44:44.475830  682995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1006 14:44:44.475882  682995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1006 14:44:44.483080  682995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1006 14:44:44.490569  682995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1006 14:44:44.490632  682995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1006 14:44:44.498056  682995 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1006 14:44:44.560210  682995 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1006 14:44:44.618315  682995 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1006 14:48:49.762009  682995 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1006 14:48:49.762136  682995 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
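All three checks that timed out are plain HTTPS probes against well-known local ports, so they can be replayed by hand while the node is in this state to see whether any component ever starts answering (a sketch; -k skips certificate verification):

	curl -k https://192.168.49.2:8443/livez      # kube-apiserver
	curl -k https://127.0.0.1:10257/healthz      # kube-controller-manager
	curl -k https://127.0.0.1:10259/livez        # kube-scheduler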
	I1006 14:48:49.765019  682995 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1006 14:48:49.765065  682995 kubeadm.go:318] [preflight] Running pre-flight checks
	I1006 14:48:49.765142  682995 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1006 14:48:49.765192  682995 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1006 14:48:49.765263  682995 kubeadm.go:318] OS: Linux
	I1006 14:48:49.765329  682995 kubeadm.go:318] CGROUPS_CPU: enabled
	I1006 14:48:49.765384  682995 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1006 14:48:49.765424  682995 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1006 14:48:49.765465  682995 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1006 14:48:49.765507  682995 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1006 14:48:49.765557  682995 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1006 14:48:49.765644  682995 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1006 14:48:49.765713  682995 kubeadm.go:318] CGROUPS_IO: enabled
	I1006 14:48:49.765816  682995 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1006 14:48:49.765897  682995 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1006 14:48:49.765974  682995 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1006 14:48:49.766033  682995 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1006 14:48:49.768189  682995 out.go:252]   - Generating certificates and keys ...
	I1006 14:48:49.768304  682995 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1006 14:48:49.768391  682995 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1006 14:48:49.768495  682995 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1006 14:48:49.768546  682995 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1006 14:48:49.768600  682995 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1006 14:48:49.768641  682995 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1006 14:48:49.768684  682995 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1006 14:48:49.768778  682995 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [ha-481559 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1006 14:48:49.768847  682995 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1006 14:48:49.768982  682995 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [ha-481559 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1006 14:48:49.769042  682995 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1006 14:48:49.769108  682995 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1006 14:48:49.769166  682995 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1006 14:48:49.769263  682995 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1006 14:48:49.769339  682995 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1006 14:48:49.769416  682995 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1006 14:48:49.769489  682995 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1006 14:48:49.769549  682995 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1006 14:48:49.769601  682995 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1006 14:48:49.769671  682995 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1006 14:48:49.769753  682995 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1006 14:48:49.771489  682995 out.go:252]   - Booting up control plane ...
	I1006 14:48:49.771577  682995 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1006 14:48:49.771664  682995 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1006 14:48:49.771742  682995 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1006 14:48:49.771858  682995 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1006 14:48:49.771974  682995 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1006 14:48:49.772108  682995 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1006 14:48:49.772220  682995 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1006 14:48:49.772288  682995 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1006 14:48:49.772413  682995 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1006 14:48:49.772556  682995 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1006 14:48:49.772647  682995 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.501252368s
	I1006 14:48:49.772772  682995 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1006 14:48:49.772891  682995 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1006 14:48:49.772971  682995 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1006 14:48:49.773033  682995 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1006 14:48:49.773108  682995 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.001319326s
	I1006 14:48:49.773189  682995 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001358761s
	I1006 14:48:49.773304  682995 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.001281021s
	I1006 14:48:49.773319  682995 kubeadm.go:318] 
	I1006 14:48:49.773407  682995 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1006 14:48:49.773472  682995 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1006 14:48:49.773545  682995 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1006 14:48:49.773657  682995 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1006 14:48:49.773771  682995 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1006 14:48:49.773850  682995 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1006 14:48:49.773891  682995 kubeadm.go:318] 
	W1006 14:48:49.774048  682995 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ha-481559 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ha-481559 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.501252368s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.001319326s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001358761s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001281021s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	I1006 14:48:49.774147  682995 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1006 14:48:52.524900  682995 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.75072398s)
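kubeadm reset --force tears down what the failed init left behind (static pod manifests, the /etc/kubernetes/*.conf kubeconfigs, the etcd data directory), which is why the config check below finds no conf files before the second attempt. Run by hand it is the same invocation the log shows, with crio's socket spelled out:

	sudo kubeadm reset --cri-socket /var/run/crio/crio.sock --force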
	I1006 14:48:52.524985  682995 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1006 14:48:52.538104  682995 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1006 14:48:52.538173  682995 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1006 14:48:52.546610  682995 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1006 14:48:52.546639  682995 kubeadm.go:157] found existing configuration files:
	
	I1006 14:48:52.546692  682995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1006 14:48:52.555271  682995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1006 14:48:52.555334  682995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1006 14:48:52.564502  682995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1006 14:48:52.572861  682995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1006 14:48:52.572925  682995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1006 14:48:52.580681  682995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1006 14:48:52.588574  682995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1006 14:48:52.588636  682995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1006 14:48:52.596314  682995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1006 14:48:52.604007  682995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1006 14:48:52.604073  682995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1006 14:48:52.611967  682995 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1006 14:48:52.650794  682995 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1006 14:48:52.650844  682995 kubeadm.go:318] [preflight] Running pre-flight checks
	I1006 14:48:52.671446  682995 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1006 14:48:52.671559  682995 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1006 14:48:52.671628  682995 kubeadm.go:318] OS: Linux
	I1006 14:48:52.671718  682995 kubeadm.go:318] CGROUPS_CPU: enabled
	I1006 14:48:52.671766  682995 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1006 14:48:52.671811  682995 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1006 14:48:52.671850  682995 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1006 14:48:52.671890  682995 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1006 14:48:52.671928  682995 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1006 14:48:52.671972  682995 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1006 14:48:52.672010  682995 kubeadm.go:318] CGROUPS_IO: enabled
	I1006 14:48:52.732758  682995 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1006 14:48:52.732876  682995 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1006 14:48:52.732979  682995 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1006 14:48:52.739914  682995 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1006 14:48:52.743428  682995 out.go:252]   - Generating certificates and keys ...
	I1006 14:48:52.743535  682995 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1006 14:48:52.743654  682995 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1006 14:48:52.743727  682995 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1006 14:48:52.743777  682995 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1006 14:48:52.743861  682995 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1006 14:48:52.743911  682995 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1006 14:48:52.743985  682995 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1006 14:48:52.744055  682995 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1006 14:48:52.744143  682995 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1006 14:48:52.744228  682995 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1006 14:48:52.744266  682995 kubeadm.go:318] [certs] Using the existing "sa" key
	I1006 14:48:52.744323  682995 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1006 14:48:53.107297  682995 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1006 14:48:53.300701  682995 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1006 14:48:53.503166  682995 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1006 14:48:53.664024  682995 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1006 14:48:53.725865  682995 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1006 14:48:53.726293  682995 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1006 14:48:53.728797  682995 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1006 14:48:53.730586  682995 out.go:252]   - Booting up control plane ...
	I1006 14:48:53.730720  682995 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1006 14:48:53.730830  682995 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1006 14:48:53.730903  682995 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1006 14:48:53.744534  682995 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1006 14:48:53.744672  682995 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1006 14:48:53.752267  682995 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1006 14:48:53.752422  682995 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1006 14:48:53.752505  682995 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1006 14:48:53.852049  682995 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1006 14:48:53.852226  682995 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1006 14:48:54.353729  682995 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 501.825241ms
	I1006 14:48:54.356542  682995 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1006 14:48:54.356619  682995 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1006 14:48:54.356695  682995 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1006 14:48:54.356819  682995 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1006 14:52:54.358331  682995 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.001082251s
	I1006 14:52:54.358653  682995 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001136686s
	I1006 14:52:54.358853  682995 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.001070627s
	I1006 14:52:54.358881  682995 kubeadm.go:318] 
	I1006 14:52:54.359059  682995 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1006 14:52:54.359298  682995 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1006 14:52:54.359539  682995 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1006 14:52:54.359760  682995 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1006 14:52:54.359952  682995 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1006 14:52:54.360116  682995 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1006 14:52:54.360148  682995 kubeadm.go:318] 
	I1006 14:52:54.363033  682995 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1006 14:52:54.363163  682995 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1006 14:52:54.363696  682995 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1006 14:52:54.363761  682995 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1006 14:52:54.363858  682995 kubeadm.go:402] duration metric: took 8m9.979166519s to StartCluster
	I1006 14:52:54.363946  682995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:52:54.364031  682995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:52:54.392579  682995 cri.go:89] found id: ""
	I1006 14:52:54.392622  682995 logs.go:282] 0 containers: []
	W1006 14:52:54.392631  682995 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:52:54.392638  682995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:52:54.392693  682995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:52:54.420188  682995 cri.go:89] found id: ""
	I1006 14:52:54.420226  682995 logs.go:282] 0 containers: []
	W1006 14:52:54.420237  682995 logs.go:284] No container was found matching "etcd"
	I1006 14:52:54.420245  682995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:52:54.420299  682995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:52:54.445694  682995 cri.go:89] found id: ""
	I1006 14:52:54.445723  682995 logs.go:282] 0 containers: []
	W1006 14:52:54.445733  682995 logs.go:284] No container was found matching "coredns"
	I1006 14:52:54.445740  682995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:52:54.445791  682995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:52:54.471923  682995 cri.go:89] found id: ""
	I1006 14:52:54.471954  682995 logs.go:282] 0 containers: []
	W1006 14:52:54.471962  682995 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:52:54.471971  682995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:52:54.472030  682995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:52:54.498805  682995 cri.go:89] found id: ""
	I1006 14:52:54.498836  682995 logs.go:282] 0 containers: []
	W1006 14:52:54.498848  682995 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:52:54.498857  682995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:52:54.498922  682995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:52:54.524613  682995 cri.go:89] found id: ""
	I1006 14:52:54.524638  682995 logs.go:282] 0 containers: []
	W1006 14:52:54.524646  682995 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:52:54.524652  682995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:52:54.524708  682995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:52:54.551140  682995 cri.go:89] found id: ""
	I1006 14:52:54.551170  682995 logs.go:282] 0 containers: []
	W1006 14:52:54.551181  682995 logs.go:284] No container was found matching "kindnet"
	I1006 14:52:54.551194  682995 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:52:54.551220  682995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:52:54.615573  682995 logs.go:123] Gathering logs for container status ...
	I1006 14:52:54.615607  682995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:52:54.645703  682995 logs.go:123] Gathering logs for kubelet ...
	I1006 14:52:54.645732  682995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:52:54.709506  682995 logs.go:123] Gathering logs for dmesg ...
	I1006 14:52:54.709543  682995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:52:54.722963  682995 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:52:54.722997  682995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:52:54.783016  682995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:52:54.774940    2611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 14:52:54.776283    2611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 14:52:54.777585    2611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 14:52:54.778053    2611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 14:52:54.779590    2611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:52:54.774940    2611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 14:52:54.776283    2611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 14:52:54.777585    2611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 14:52:54.778053    2611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 14:52:54.779590    2611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W1006 14:52:54.783054  682995 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.825241ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.001082251s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001136686s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001070627s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI.
	Here is one example of how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1006 14:52:54.783107  682995 out.go:285] * 
	W1006 14:52:54.783182  682995 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.825241ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.001082251s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001136686s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001070627s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI.
	Here is one example of how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1006 14:52:54.783200  682995 out.go:285] * 
	W1006 14:52:54.785658  682995 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1006 14:52:54.789273  682995 out.go:203] 
	W1006 14:52:54.790573  682995 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.825241ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.001082251s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001136686s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001070627s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI.
	Here is one example of how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1006 14:52:54.790604  682995 out.go:285] * 
	I1006 14:52:54.791821  682995 out.go:203] 
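The kubeadm output above repeats its crictl triage advice three times; a minimal sketch of that triage on this node, using the CRI-O socket this job is configured with (CONTAINERID is a placeholder):

	# List kube containers the way kubeadm suggests (CRI-O runtime endpoint):
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# Inspect a failing container found above (CONTAINERID is a placeholder):
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID
	# Here creation fails before any container exists, so ps -a may be empty;
	# the createCtr errors then show up in the crio journal instead:
	sudo journalctl -u crio -n 400 | grep -i 'creation error'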
	
	
	==> CRI-O <==
	Oct 06 14:52:46 ha-481559 crio[777]: time="2025-10-06T14:52:46.24531553Z" level=info msg="createCtr: removing container 009abe713b25ad24021f56f3b2b6f239f4463399320ac4025a811ee58fb11a96" id=690ce400-d8f0-4025-8d1f-23e204c5578e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:52:46 ha-481559 crio[777]: time="2025-10-06T14:52:46.245343363Z" level=info msg="createCtr: deleting container 009abe713b25ad24021f56f3b2b6f239f4463399320ac4025a811ee58fb11a96 from storage" id=690ce400-d8f0-4025-8d1f-23e204c5578e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:52:46 ha-481559 crio[777]: time="2025-10-06T14:52:46.247302652Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-ha-481559_kube-system_5f3181798721fe8691d871f051785efc_0" id=690ce400-d8f0-4025-8d1f-23e204c5578e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:52:47 ha-481559 crio[777]: time="2025-10-06T14:52:47.222228204Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=b4a953c0-5b5e-49a4-b149-e91f85b4f53c name=/runtime.v1.ImageService/ImageStatus
	Oct 06 14:52:47 ha-481559 crio[777]: time="2025-10-06T14:52:47.223093749Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=642b5bdf-0bd4-45c7-a1e8-1acf089238e4 name=/runtime.v1.ImageService/ImageStatus
	Oct 06 14:52:47 ha-481559 crio[777]: time="2025-10-06T14:52:47.22397934Z" level=info msg="Creating container: kube-system/kube-scheduler-ha-481559/kube-scheduler" id=3cad2771-59fd-444f-92ec-8ce6c885a8f1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:52:47 ha-481559 crio[777]: time="2025-10-06T14:52:47.224242461Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 14:52:47 ha-481559 crio[777]: time="2025-10-06T14:52:47.227573635Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 14:52:47 ha-481559 crio[777]: time="2025-10-06T14:52:47.228136732Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 14:52:47 ha-481559 crio[777]: time="2025-10-06T14:52:47.242109903Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=3cad2771-59fd-444f-92ec-8ce6c885a8f1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:52:47 ha-481559 crio[777]: time="2025-10-06T14:52:47.243474019Z" level=info msg="createCtr: deleting container ID 4b69598d0642ec4167ba9125a3ef933fb82212282c85ddb929fc4b675212ef7b from idIndex" id=3cad2771-59fd-444f-92ec-8ce6c885a8f1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:52:47 ha-481559 crio[777]: time="2025-10-06T14:52:47.243507789Z" level=info msg="createCtr: removing container 4b69598d0642ec4167ba9125a3ef933fb82212282c85ddb929fc4b675212ef7b" id=3cad2771-59fd-444f-92ec-8ce6c885a8f1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:52:47 ha-481559 crio[777]: time="2025-10-06T14:52:47.243540821Z" level=info msg="createCtr: deleting container 4b69598d0642ec4167ba9125a3ef933fb82212282c85ddb929fc4b675212ef7b from storage" id=3cad2771-59fd-444f-92ec-8ce6c885a8f1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:52:47 ha-481559 crio[777]: time="2025-10-06T14:52:47.245706887Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-ha-481559_kube-system_cc93cb8d89afaa943672c70952b45174_0" id=3cad2771-59fd-444f-92ec-8ce6c885a8f1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:52:52 ha-481559 crio[777]: time="2025-10-06T14:52:52.221317978Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=caf47064-b756-417a-b4c5-dc71614b9cd3 name=/runtime.v1.ImageService/ImageStatus
	Oct 06 14:52:52 ha-481559 crio[777]: time="2025-10-06T14:52:52.222276859Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=b9a3f4ed-7a9f-4ec4-9c2a-81b8c1bdde68 name=/runtime.v1.ImageService/ImageStatus
	Oct 06 14:52:52 ha-481559 crio[777]: time="2025-10-06T14:52:52.223181805Z" level=info msg="Creating container: kube-system/kube-apiserver-ha-481559/kube-apiserver" id=e2fe9733-e64c-4ca8-a099-9a52ef1c0689 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:52:52 ha-481559 crio[777]: time="2025-10-06T14:52:52.223442798Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 14:52:52 ha-481559 crio[777]: time="2025-10-06T14:52:52.226843083Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 14:52:52 ha-481559 crio[777]: time="2025-10-06T14:52:52.227269542Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 14:52:52 ha-481559 crio[777]: time="2025-10-06T14:52:52.246326878Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=e2fe9733-e64c-4ca8-a099-9a52ef1c0689 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:52:52 ha-481559 crio[777]: time="2025-10-06T14:52:52.24774785Z" level=info msg="createCtr: deleting container ID b39bb07a24cca12f3d2fde406b5ae34e6a11ee3e01dbdfd3507a391fb4fe7e28 from idIndex" id=e2fe9733-e64c-4ca8-a099-9a52ef1c0689 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:52:52 ha-481559 crio[777]: time="2025-10-06T14:52:52.247782815Z" level=info msg="createCtr: removing container b39bb07a24cca12f3d2fde406b5ae34e6a11ee3e01dbdfd3507a391fb4fe7e28" id=e2fe9733-e64c-4ca8-a099-9a52ef1c0689 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:52:52 ha-481559 crio[777]: time="2025-10-06T14:52:52.247820846Z" level=info msg="createCtr: deleting container b39bb07a24cca12f3d2fde406b5ae34e6a11ee3e01dbdfd3507a391fb4fe7e28 from storage" id=e2fe9733-e64c-4ca8-a099-9a52ef1c0689 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:52:52 ha-481559 crio[777]: time="2025-10-06T14:52:52.249899504Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-ha-481559_kube-system_b4e1cca8a09d3789a7e0862428dfe0db_0" id=e2fe9733-e64c-4ca8-a099-9a52ef1c0689 name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:52:55.715816    2742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 14:52:55.716639    2742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 14:52:55.718246    2742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 14:52:55.718641    2742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 14:52:55.720148    2742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	
	
	==> kernel <==
	 14:52:55 up  5:35,  0 user,  load average: 0.00, 0.04, 0.15
	Linux ha-481559 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 06 14:52:46 ha-481559 kubelet[1985]: E1006 14:52:46.247698    1985 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-ha-481559" podUID="5f3181798721fe8691d871f051785efc"
	Oct 06 14:52:47 ha-481559 kubelet[1985]: E1006 14:52:47.221735    1985 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-481559\" not found" node="ha-481559"
	Oct 06 14:52:47 ha-481559 kubelet[1985]: E1006 14:52:47.245984    1985 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 06 14:52:47 ha-481559 kubelet[1985]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 06 14:52:47 ha-481559 kubelet[1985]:  > podSandboxID="28815a6c32deaa458111079bbac61f47b8e22f338f2282fab7d62077c8b07f1e"
	Oct 06 14:52:47 ha-481559 kubelet[1985]: E1006 14:52:47.246089    1985 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 06 14:52:47 ha-481559 kubelet[1985]:         container kube-scheduler start failed in pod kube-scheduler-ha-481559_kube-system(cc93cb8d89afaa943672c70952b45174): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 06 14:52:47 ha-481559 kubelet[1985]:  > logger="UnhandledError"
	Oct 06 14:52:47 ha-481559 kubelet[1985]: E1006 14:52:47.246119    1985 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-ha-481559" podUID="cc93cb8d89afaa943672c70952b45174"
	Oct 06 14:52:50 ha-481559 kubelet[1985]: E1006 14:52:50.149886    1985 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8443: connect: connection refused" event="&Event{ObjectMeta:{ha-481559.186bee56630f4bc0  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-481559,UID:ha-481559,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node ha-481559 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:ha-481559,},FirstTimestamp:2025-10-06 14:48:54.214855616 +0000 UTC m=+0.361984785,LastTimestamp:2025-10-06 14:48:54.214855616 +0000 UTC m=+0.361984785,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-481559,}"
	Oct 06 14:52:50 ha-481559 kubelet[1985]: E1006 14:52:50.845785    1985 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-481559?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 06 14:52:51 ha-481559 kubelet[1985]: I1006 14:52:51.009309    1985 kubelet_node_status.go:75] "Attempting to register node" node="ha-481559"
	Oct 06 14:52:51 ha-481559 kubelet[1985]: E1006 14:52:51.009743    1985 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-481559"
	Oct 06 14:52:52 ha-481559 kubelet[1985]: E1006 14:52:52.220839    1985 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-481559\" not found" node="ha-481559"
	Oct 06 14:52:52 ha-481559 kubelet[1985]: E1006 14:52:52.250264    1985 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 06 14:52:52 ha-481559 kubelet[1985]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 06 14:52:52 ha-481559 kubelet[1985]:  > podSandboxID="cadd804367d6dcdba2fb49fe06e3c1db8b35e6ee5c505328925ae346d4cdb867"
	Oct 06 14:52:52 ha-481559 kubelet[1985]: E1006 14:52:52.250392    1985 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 06 14:52:52 ha-481559 kubelet[1985]:         container kube-apiserver start failed in pod kube-apiserver-ha-481559_kube-system(b4e1cca8a09d3789a7e0862428dfe0db): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 06 14:52:52 ha-481559 kubelet[1985]:  > logger="UnhandledError"
	Oct 06 14:52:52 ha-481559 kubelet[1985]: E1006 14:52:52.250423    1985 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-ha-481559" podUID="b4e1cca8a09d3789a7e0862428dfe0db"
	Oct 06 14:52:52 ha-481559 kubelet[1985]: E1006 14:52:52.805724    1985 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	Oct 06 14:52:54 ha-481559 kubelet[1985]: E1006 14:52:54.150884    1985 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://192.168.49.2:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
	Oct 06 14:52:54 ha-481559 kubelet[1985]: E1006 14:52:54.239571    1985 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-481559\" not found"
	Oct 06 14:52:54 ha-481559 kubelet[1985]: E1006 14:52:54.787783    1985 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.49.2:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-481559&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	

-- /stdout --
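Both the CRI-O and kubelet excerpts above fail every control-plane container create with "cannot open sd-bus: No such file or directory", which typically indicates a systemd cgroup manager being used while no systemd/D-Bus is reachable inside the node. A hedged way to check, assuming the stock CRI-O config locations, from a shell in the node (minikube ssh -p ha-481559):

	# Which cgroup manager is CRI-O configured with?
	grep -R cgroup_manager /etc/crio/ 2>/dev/null
	# Is the system D-Bus socket present, and is dbus running?
	ls -l /run/dbus/system_bus_socket
	systemctl is-active dbus 2>/dev/null || echo "dbus not active"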
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-481559 -n ha-481559
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-481559 -n ha-481559: exit status 6 (294.391419ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1006 14:52:56.089823  688531 status.go:458] kubeconfig endpoint: get endpoint: "ha-481559" does not appear in /home/jenkins/minikube-integration/21701-626179/kubeconfig

** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "ha-481559" apiserver is not running, skipping kubectl commands (state="Stopped")
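The status output above already names the follow-up for the stale context; per that WARNING, a short manual recovery attempt would be:

	out/minikube-linux-amd64 update-context -p ha-481559   # re-sync the kubeconfig endpoint
	out/minikube-linux-amd64 status -p ha-481559           # then re-check component state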
--- FAIL: TestMultiControlPlane/serial/StartCluster (501.92s)

x
+
TestMultiControlPlane/serial/DeployApp (93.08s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-481559 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:128: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-481559 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml: exit status 1 (95.418641ms)

** stderr ** 
	error: cluster "ha-481559" does not exist

** /stderr **
ha_test.go:130: failed to create busybox deployment to ha (multi-control plane) cluster
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-481559 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-481559 kubectl -- rollout status deployment/busybox: exit status 1 (92.16458ms)

** stderr ** 
	error: no server found for cluster "ha-481559"

** /stderr **
ha_test.go:135: failed to deploy busybox to ha (multi-control plane) cluster
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (91.29526ms)

** stderr ** 
	error: no server found for cluster "ha-481559"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1006 14:52:56.384314  629719 retry.go:31] will retry after 516.464913ms: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (94.388572ms)

** stderr ** 
	error: no server found for cluster "ha-481559"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1006 14:52:56.995599  629719 retry.go:31] will retry after 915.865747ms: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (92.313629ms)

** stderr ** 
	error: no server found for cluster "ha-481559"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1006 14:52:58.004128  629719 retry.go:31] will retry after 2.340515763s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (96.124748ms)

** stderr ** 
	error: no server found for cluster "ha-481559"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1006 14:53:00.442916  629719 retry.go:31] will retry after 3.168821702s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (92.757996ms)

** stderr ** 
	error: no server found for cluster "ha-481559"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1006 14:53:03.707137  629719 retry.go:31] will retry after 5.261829617s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (93.815565ms)

** stderr ** 
	error: no server found for cluster "ha-481559"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1006 14:53:09.063469  629719 retry.go:31] will retry after 4.759319951s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (99.992489ms)

** stderr ** 
	error: no server found for cluster "ha-481559"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1006 14:53:13.925160  629719 retry.go:31] will retry after 12.46552666s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (92.631337ms)

** stderr ** 
	error: no server found for cluster "ha-481559"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1006 14:53:26.489145  629719 retry.go:31] will retry after 16.789621355s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (92.382829ms)

** stderr ** 
	error: no server found for cluster "ha-481559"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1006 14:53:43.373602  629719 retry.go:31] will retry after 21.319972003s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (95.123895ms)

** stderr ** 
	error: no server found for cluster "ha-481559"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1006 14:54:04.792521  629719 retry.go:31] will retry after 22.662815452s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (96.057642ms)

** stderr ** 
	error: no server found for cluster "ha-481559"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:159: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
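The retry intervals above (0.5s, 0.9s, 2.3s, 3.2s, ... 22.7s) are retry.go's jittered exponential backoff; a minimal bash sketch of the same pattern, hypothetical rather than the test's actual code:

	# Hypothetical: retry the pod-IP query with jittered exponential backoff.
	delay=1
	for attempt in 1 2 3 4 5 6 7 8 9 10; do
	  out/minikube-linux-amd64 -p ha-481559 kubectl -- \
	    get pods -o jsonpath='{.items[*].status.podIP}' && break
	  sleep $((delay + RANDOM % delay))   # wait in [delay, 2*delay)
	  delay=$((delay * 2))
	done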
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-481559 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:163: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-481559 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (92.789397ms)

** stderr ** 
	error: no server found for cluster "ha-481559"

** /stderr **
ha_test.go:165: failed get Pod names
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-481559 kubectl -- exec  -- nslookup kubernetes.io
ha_test.go:171: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-481559 kubectl -- exec  -- nslookup kubernetes.io: exit status 1 (93.22885ms)

** stderr ** 
	error: no server found for cluster "ha-481559"

** /stderr **
ha_test.go:173: Pod  could not resolve 'kubernetes.io': exit status 1
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-481559 kubectl -- exec  -- nslookup kubernetes.default
ha_test.go:181: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-481559 kubectl -- exec  -- nslookup kubernetes.default: exit status 1 (93.946901ms)

** stderr ** 
	error: no server found for cluster "ha-481559"

** /stderr **
ha_test.go:183: Pod  could not resolve 'kubernetes.default': exit status 1
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-481559 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-481559 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (93.721653ms)

** stderr ** 
	error: no server found for cluster "ha-481559"

** /stderr **
ha_test.go:191: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
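Every exec above ran with an empty pod name because the pod list never resolved; for reference, the DNS checks the test intends on a healthy cluster look like this sketch (the pod name is whatever the busybox deployment produced):

	POD=$(out/minikube-linux-amd64 -p ha-481559 kubectl -- \
	  get pods -o jsonpath='{.items[0].metadata.name}')
	out/minikube-linux-amd64 -p ha-481559 kubectl -- exec "$POD" -- nslookup kubernetes.io
	out/minikube-linux-amd64 -p ha-481559 kubectl -- exec "$POD" -- nslookup kubernetes.default.svc.cluster.local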
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-481559
helpers_test.go:243: (dbg) docker inspect ha-481559:

-- stdout --
	[
	    {
	        "Id": "8b017d29b6b188c11217460aa328e959f3cceef4aaac68c0efbe9c3f356f27b0",
	        "Created": "2025-10-06T14:44:39.623616791Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 683567,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-06T14:44:39.660699919Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/8b017d29b6b188c11217460aa328e959f3cceef4aaac68c0efbe9c3f356f27b0/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8b017d29b6b188c11217460aa328e959f3cceef4aaac68c0efbe9c3f356f27b0/hostname",
	        "HostsPath": "/var/lib/docker/containers/8b017d29b6b188c11217460aa328e959f3cceef4aaac68c0efbe9c3f356f27b0/hosts",
	        "LogPath": "/var/lib/docker/containers/8b017d29b6b188c11217460aa328e959f3cceef4aaac68c0efbe9c3f356f27b0/8b017d29b6b188c11217460aa328e959f3cceef4aaac68c0efbe9c3f356f27b0-json.log",
	        "Name": "/ha-481559",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-481559:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-481559",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "8b017d29b6b188c11217460aa328e959f3cceef4aaac68c0efbe9c3f356f27b0",
	                "LowerDir": "/var/lib/docker/overlay2/764c4d13032cea1c981a1513641378a4e309e5bc01b5356c6fecba1c3e0e2311-init/diff:/var/lib/docker/overlay2/498c39ad2e273bbda04a4b230222b9767ea2da097b1fe98436168d26143cd080/diff",
	                "MergedDir": "/var/lib/docker/overlay2/764c4d13032cea1c981a1513641378a4e309e5bc01b5356c6fecba1c3e0e2311/merged",
	                "UpperDir": "/var/lib/docker/overlay2/764c4d13032cea1c981a1513641378a4e309e5bc01b5356c6fecba1c3e0e2311/diff",
	                "WorkDir": "/var/lib/docker/overlay2/764c4d13032cea1c981a1513641378a4e309e5bc01b5356c6fecba1c3e0e2311/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-481559",
	                "Source": "/var/lib/docker/volumes/ha-481559/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-481559",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-481559",
	                "name.minikube.sigs.k8s.io": "ha-481559",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7effae92997970d320561b0b86c210815b18a55d65bd555e1bff50158ed38adc",
	            "SandboxKey": "/var/run/docker/netns/7effae929979",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32883"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32884"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32887"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32885"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32886"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-481559": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "42:f3:45:3f:5b:fc",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "be549c6a1ae4457d4629d9a7f86cde88021333ee0af8bb7a740b008115c43dde",
	                    "EndpointID": "b8540561692606ad815fcdb4502c1e36a16141413d3697f4cf48668502930e4c",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-481559",
	                        "8b017d29b6b1"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
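The inspect payload above maps every exposed container port to an ephemeral host port (22/tcp -> 32883, 8443/tcp -> 32886, and so on). A minimal sketch of reading one mapping back out, using the same Go-template form the harness itself uses later in this log:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' ha-481559
	# 32883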
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-481559 -n ha-481559
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-481559 -n ha-481559: exit status 6 (290.808354ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1006 14:54:28.225094  689505 status.go:458] kubeconfig endpoint: get endpoint: "ha-481559" does not appear in /home/jenkins/minikube-integration/21701-626179/kubeconfig

** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
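The stale-context warning above already names the remedy; a minimal sketch, assuming the profile name from this run:

	minikube update-context -p ha-481559   # rewrites the kubeconfig endpoint for the profile
	kubectl config current-context         # verify the context now points at ha-481559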
helpers_test.go:252: <<< TestMultiControlPlane/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-481559 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                      ARGS                                                       │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-135520 image ls --format json --alsologtostderr                                                      │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │ 06 Oct 25 14:40 UTC │
	│ image   │ functional-135520 image ls --format table --alsologtostderr                                                     │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │ 06 Oct 25 14:40 UTC │
	│ image   │ functional-135520 image ls --format yaml --alsologtostderr                                                      │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │ 06 Oct 25 14:40 UTC │
	│ ssh     │ functional-135520 ssh pgrep buildkitd                                                                           │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │                     │
	│ image   │ functional-135520 image build -t localhost/my-image:functional-135520 testdata/build --alsologtostderr          │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │ 06 Oct 25 14:40 UTC │
	│ image   │ functional-135520 image ls                                                                                      │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │ 06 Oct 25 14:40 UTC │
	│ delete  │ -p functional-135520                                                                                            │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:44 UTC │ 06 Oct 25 14:44 UTC │
	│ start   │ ha-481559 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:44 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml                                                │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:52 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- rollout status deployment/busybox                                                          │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:52 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:52 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:52 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:52 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:53 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:53 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:53 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:53 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:53 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:53 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:54 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:54 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:54 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- exec  -- nslookup kubernetes.io                                                            │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:54 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- exec  -- nslookup kubernetes.default                                                       │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:54 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local                                     │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:54 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
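Note the double space in the final `exec  -- nslookup` rows: the preceding jsonpath queries for pod names evidently came back empty, so the DNS checks ran with no target pod. A sketch of what those checks intend, with the pod name resolved first (busybox deployment per the rollout row above):

	POD=$(kubectl get pods -o jsonpath='{.items[0].metadata.name}')      # empty in this run
	kubectl exec "$POD" -- nslookup kubernetes.default.svc.cluster.local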
	
	
	==> Last Start <==
	Log file created at: 2025/10/06 14:44:34
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1006 14:44:34.230587  682995 out.go:360] Setting OutFile to fd 1 ...
	I1006 14:44:34.230719  682995 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 14:44:34.230728  682995 out.go:374] Setting ErrFile to fd 2...
	I1006 14:44:34.230733  682995 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 14:44:34.230969  682995 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-626179/.minikube/bin
	I1006 14:44:34.231523  682995 out.go:368] Setting JSON to false
	I1006 14:44:34.232538  682995 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":19610,"bootTime":1759742264,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1006 14:44:34.232651  682995 start.go:140] virtualization: kvm guest
	I1006 14:44:34.235278  682995 out.go:179] * [ha-481559] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1006 14:44:34.236668  682995 out.go:179]   - MINIKUBE_LOCATION=21701
	I1006 14:44:34.236708  682995 notify.go:220] Checking for updates...
	I1006 14:44:34.239256  682995 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1006 14:44:34.240475  682995 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21701-626179/kubeconfig
	I1006 14:44:34.242249  682995 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21701-626179/.minikube
	I1006 14:44:34.243577  682995 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1006 14:44:34.244737  682995 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1006 14:44:34.246267  682995 driver.go:421] Setting default libvirt URI to qemu:///system
	I1006 14:44:34.271626  682995 docker.go:123] docker version: linux-28.5.0:Docker Engine - Community
	I1006 14:44:34.271783  682995 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 14:44:34.334697  682995 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-06 14:44:34.323928193 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1006 14:44:34.334819  682995 docker.go:318] overlay module found
	I1006 14:44:34.336770  682995 out.go:179] * Using the docker driver based on user configuration
	I1006 14:44:34.338109  682995 start.go:304] selected driver: docker
	I1006 14:44:34.338130  682995 start.go:924] validating driver "docker" against <nil>
	I1006 14:44:34.338144  682995 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1006 14:44:34.338750  682995 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 14:44:34.398314  682995 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-06 14:44:34.387376197 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1006 14:44:34.398587  682995 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1006 14:44:34.399080  682995 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1006 14:44:34.401095  682995 out.go:179] * Using Docker driver with root privileges
	I1006 14:44:34.402283  682995 cni.go:84] Creating CNI manager for ""
	I1006 14:44:34.402367  682995 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1006 14:44:34.402383  682995 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1006 14:44:34.402476  682995 start.go:348] cluster config:
	{Name:ha-481559 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-481559 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
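For reference, this cluster config was generated from the flags recorded in the audit table above:

	out/minikube-linux-amd64 start -p ha-481559 --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker --container-runtime=crio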
	I1006 14:44:34.403829  682995 out.go:179] * Starting "ha-481559" primary control-plane node in "ha-481559" cluster
	I1006 14:44:34.404899  682995 cache.go:123] Beginning downloading kic base image for docker with crio
	I1006 14:44:34.406166  682995 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1006 14:44:34.407227  682995 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1006 14:44:34.407272  682995 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21701-626179/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1006 14:44:34.407284  682995 cache.go:58] Caching tarball of preloaded images
	I1006 14:44:34.407376  682995 preload.go:233] Found /home/jenkins/minikube-integration/21701-626179/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1006 14:44:34.407382  682995 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1006 14:44:34.407387  682995 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1006 14:44:34.407757  682995 profile.go:143] Saving config to /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/config.json ...
	I1006 14:44:34.407793  682995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/config.json: {Name:mkefd90ec0b9eae63c82d60bab053cdf7b5d9b74 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:44:34.429193  682995 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1006 14:44:34.429233  682995 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1006 14:44:34.429254  682995 cache.go:232] Successfully downloaded all kic artifacts
	I1006 14:44:34.429296  682995 start.go:360] acquireMachinesLock for ha-481559: {Name:mk240cd185ab39e9e4d3fa7c476aea5736cb5b11 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1006 14:44:34.429397  682995 start.go:364] duration metric: took 84.055µs to acquireMachinesLock for "ha-481559"
	I1006 14:44:34.429421  682995 start.go:93] Provisioning new machine with config: &{Name:ha-481559 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-481559 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1006 14:44:34.429503  682995 start.go:125] createHost starting for "" (driver="docker")
	I1006 14:44:34.431456  682995 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1006 14:44:34.431692  682995 start.go:159] libmachine.API.Create for "ha-481559" (driver="docker")
	I1006 14:44:34.431725  682995 client.go:168] LocalClient.Create starting
	I1006 14:44:34.431791  682995 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem
	I1006 14:44:34.431825  682995 main.go:141] libmachine: Decoding PEM data...
	I1006 14:44:34.431843  682995 main.go:141] libmachine: Parsing certificate...
	I1006 14:44:34.431939  682995 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21701-626179/.minikube/certs/cert.pem
	I1006 14:44:34.431977  682995 main.go:141] libmachine: Decoding PEM data...
	I1006 14:44:34.431994  682995 main.go:141] libmachine: Parsing certificate...
	I1006 14:44:34.432416  682995 cli_runner.go:164] Run: docker network inspect ha-481559 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1006 14:44:34.449965  682995 cli_runner.go:211] docker network inspect ha-481559 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1006 14:44:34.450053  682995 network_create.go:284] running [docker network inspect ha-481559] to gather additional debugging logs...
	I1006 14:44:34.450071  682995 cli_runner.go:164] Run: docker network inspect ha-481559
	W1006 14:44:34.468682  682995 cli_runner.go:211] docker network inspect ha-481559 returned with exit code 1
	I1006 14:44:34.468713  682995 network_create.go:287] error running [docker network inspect ha-481559]: docker network inspect ha-481559: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-481559 not found
	I1006 14:44:34.468724  682995 network_create.go:289] output of [docker network inspect ha-481559]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-481559 not found
	
	** /stderr **
	I1006 14:44:34.468902  682995 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1006 14:44:34.488223  682995 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ca2540}
	I1006 14:44:34.488276  682995 network_create.go:124] attempt to create docker network ha-481559 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1006 14:44:34.488338  682995 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-481559 ha-481559
	I1006 14:44:34.548630  682995 network_create.go:108] docker network ha-481559 192.168.49.0/24 created
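A quick way to confirm the subnet and gateway just picked for the network (same values network.go reports above):

	docker network inspect ha-481559 -f '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'
	# 192.168.49.0/24 192.168.49.1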
	I1006 14:44:34.548669  682995 kic.go:121] calculated static IP "192.168.49.2" for the "ha-481559" container
	I1006 14:44:34.548729  682995 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1006 14:44:34.566959  682995 cli_runner.go:164] Run: docker volume create ha-481559 --label name.minikube.sigs.k8s.io=ha-481559 --label created_by.minikube.sigs.k8s.io=true
	I1006 14:44:34.586001  682995 oci.go:103] Successfully created a docker volume ha-481559
	I1006 14:44:34.586088  682995 cli_runner.go:164] Run: docker run --rm --name ha-481559-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-481559 --entrypoint /usr/bin/test -v ha-481559:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib
	I1006 14:44:34.994169  682995 oci.go:107] Successfully prepared a docker volume ha-481559
	I1006 14:44:34.994233  682995 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1006 14:44:34.994280  682995 kic.go:194] Starting extracting preloaded images to volume ...
	I1006 14:44:34.994349  682995 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21701-626179/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-481559:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir
	I1006 14:44:39.551248  682995 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21701-626179/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-481559:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir: (4.556814521s)
	I1006 14:44:39.551287  682995 kic.go:203] duration metric: took 4.557022471s to extract preloaded images to volume ...
	W1006 14:44:39.551374  682995 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1006 14:44:39.551406  682995 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1006 14:44:39.551451  682995 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1006 14:44:39.608040  682995 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-481559 --name ha-481559 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-481559 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-481559 --network ha-481559 --ip 192.168.49.2 --volume ha-481559:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d
	I1006 14:44:39.865946  682995 cli_runner.go:164] Run: docker container inspect ha-481559 --format={{.State.Running}}
	I1006 14:44:39.883061  682995 cli_runner.go:164] Run: docker container inspect ha-481559 --format={{.State.Status}}
	I1006 14:44:39.901066  682995 cli_runner.go:164] Run: docker exec ha-481559 stat /var/lib/dpkg/alternatives/iptables
	I1006 14:44:39.951869  682995 oci.go:144] the created container "ha-481559" has a running status.
	I1006 14:44:39.951908  682995 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa...
	I1006 14:44:40.176341  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1006 14:44:40.176392  682995 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1006 14:44:40.205643  682995 cli_runner.go:164] Run: docker container inspect ha-481559 --format={{.State.Status}}
	I1006 14:44:40.227924  682995 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1006 14:44:40.227948  682995 kic_runner.go:114] Args: [docker exec --privileged ha-481559 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1006 14:44:40.277808  682995 cli_runner.go:164] Run: docker container inspect ha-481559 --format={{.State.Status}}
	I1006 14:44:40.297063  682995 machine.go:93] provisionDockerMachine start ...
	I1006 14:44:40.297156  682995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:44:40.315828  682995 main.go:141] libmachine: Using SSH client type: native
	I1006 14:44:40.316109  682995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32883 <nil> <nil>}
	I1006 14:44:40.316124  682995 main.go:141] libmachine: About to run SSH command:
	hostname
	I1006 14:44:40.461735  682995 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-481559
	
	I1006 14:44:40.461771  682995 ubuntu.go:182] provisioning hostname "ha-481559"
	I1006 14:44:40.461843  682995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:44:40.481222  682995 main.go:141] libmachine: Using SSH client type: native
	I1006 14:44:40.481551  682995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32883 <nil> <nil>}
	I1006 14:44:40.481575  682995 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-481559 && echo "ha-481559" | sudo tee /etc/hostname
	I1006 14:44:40.636624  682995 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-481559
	
	I1006 14:44:40.636709  682995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:44:40.655017  682995 main.go:141] libmachine: Using SSH client type: native
	I1006 14:44:40.655283  682995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32883 <nil> <nil>}
	I1006 14:44:40.655302  682995 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-481559' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-481559/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-481559' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1006 14:44:40.801276  682995 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1006 14:44:40.801313  682995 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21701-626179/.minikube CaCertPath:/home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21701-626179/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21701-626179/.minikube}
	I1006 14:44:40.801332  682995 ubuntu.go:190] setting up certificates
	I1006 14:44:40.801344  682995 provision.go:84] configureAuth start
	I1006 14:44:40.801398  682995 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-481559
	I1006 14:44:40.819000  682995 provision.go:143] copyHostCerts
	I1006 14:44:40.819052  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21701-626179/.minikube/ca.pem
	I1006 14:44:40.819089  682995 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-626179/.minikube/ca.pem, removing ...
	I1006 14:44:40.819099  682995 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-626179/.minikube/ca.pem
	I1006 14:44:40.819169  682995 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21701-626179/.minikube/ca.pem (1082 bytes)
	I1006 14:44:40.819281  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21701-626179/.minikube/cert.pem
	I1006 14:44:40.819304  682995 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-626179/.minikube/cert.pem, removing ...
	I1006 14:44:40.819309  682995 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-626179/.minikube/cert.pem
	I1006 14:44:40.819338  682995 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21701-626179/.minikube/cert.pem (1123 bytes)
	I1006 14:44:40.819400  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21701-626179/.minikube/key.pem
	I1006 14:44:40.819416  682995 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-626179/.minikube/key.pem, removing ...
	I1006 14:44:40.819428  682995 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-626179/.minikube/key.pem
	I1006 14:44:40.819460  682995 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21701-626179/.minikube/key.pem (1679 bytes)
	I1006 14:44:40.819525  682995 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21701-626179/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca-key.pem org=jenkins.ha-481559 san=[127.0.0.1 192.168.49.2 ha-481559 localhost minikube]
	I1006 14:44:40.896257  682995 provision.go:177] copyRemoteCerts
	I1006 14:44:40.896328  682995 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1006 14:44:40.896370  682995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:44:40.914092  682995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32883 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa Username:docker}
	I1006 14:44:41.016898  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1006 14:44:41.016969  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1006 14:44:41.037131  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1006 14:44:41.037215  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1006 14:44:41.055180  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1006 14:44:41.055258  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1006 14:44:41.073045  682995 provision.go:87] duration metric: took 271.684433ms to configureAuth
	I1006 14:44:41.073074  682995 ubuntu.go:206] setting minikube options for container-runtime
	I1006 14:44:41.073312  682995 config.go:182] Loaded profile config "ha-481559": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 14:44:41.073456  682995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:44:41.092548  682995 main.go:141] libmachine: Using SSH client type: native
	I1006 14:44:41.092838  682995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32883 <nil> <nil>}
	I1006 14:44:41.092869  682995 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1006 14:44:41.356221  682995 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1006 14:44:41.356247  682995 machine.go:96] duration metric: took 1.059160507s to provisionDockerMachine
	I1006 14:44:41.356259  682995 client.go:171] duration metric: took 6.924524382s to LocalClient.Create
	I1006 14:44:41.356282  682995 start.go:167] duration metric: took 6.924591304s to libmachine.API.Create "ha-481559"
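Provisioning finished by writing CRIO_MINIKUBE_OPTIONS into /etc/sysconfig and restarting CRI-O (the tee command a few lines up); the result can be checked from the host with:

	docker exec ha-481559 cat /etc/sysconfig/crio.minikube
	# CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '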
	I1006 14:44:41.356295  682995 start.go:293] postStartSetup for "ha-481559" (driver="docker")
	I1006 14:44:41.356322  682995 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1006 14:44:41.356396  682995 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1006 14:44:41.356453  682995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:44:41.374424  682995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32883 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa Username:docker}
	I1006 14:44:41.479545  682995 ssh_runner.go:195] Run: cat /etc/os-release
	I1006 14:44:41.483318  682995 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1006 14:44:41.483345  682995 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1006 14:44:41.483356  682995 filesync.go:126] Scanning /home/jenkins/minikube-integration/21701-626179/.minikube/addons for local assets ...
	I1006 14:44:41.483402  682995 filesync.go:126] Scanning /home/jenkins/minikube-integration/21701-626179/.minikube/files for local assets ...
	I1006 14:44:41.483499  682995 filesync.go:149] local asset: /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem -> 6297192.pem in /etc/ssl/certs
	I1006 14:44:41.483510  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem -> /etc/ssl/certs/6297192.pem
	I1006 14:44:41.483603  682995 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1006 14:44:41.491409  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem --> /etc/ssl/certs/6297192.pem (1708 bytes)
	I1006 14:44:41.511609  682995 start.go:296] duration metric: took 155.29938ms for postStartSetup
	I1006 14:44:41.511914  682995 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-481559
	I1006 14:44:41.529867  682995 profile.go:143] Saving config to /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/config.json ...
	I1006 14:44:41.530158  682995 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1006 14:44:41.530223  682995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:44:41.547995  682995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32883 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa Username:docker}
	I1006 14:44:41.647810  682995 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1006 14:44:41.652637  682995 start.go:128] duration metric: took 7.223117194s to createHost
	I1006 14:44:41.652662  682995 start.go:83] releasing machines lock for "ha-481559", held for 7.223254897s
	I1006 14:44:41.652730  682995 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-481559
	I1006 14:44:41.670486  682995 ssh_runner.go:195] Run: cat /version.json
	I1006 14:44:41.670511  682995 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1006 14:44:41.670555  682995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:44:41.670581  682995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:44:41.689278  682995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32883 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa Username:docker}
	I1006 14:44:41.689801  682995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32883 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa Username:docker}
	I1006 14:44:41.845142  682995 ssh_runner.go:195] Run: systemctl --version
	I1006 14:44:41.852333  682995 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1006 14:44:41.886799  682995 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1006 14:44:41.891575  682995 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1006 14:44:41.891645  682995 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1006 14:44:41.918020  682995 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1006 14:44:41.918049  682995 start.go:495] detecting cgroup driver to use...
	I1006 14:44:41.918088  682995 detect.go:190] detected "systemd" cgroup driver on host os
	I1006 14:44:41.918148  682995 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1006 14:44:41.934827  682995 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1006 14:44:41.946573  682995 docker.go:218] disabling cri-docker service (if available) ...
	I1006 14:44:41.946626  682995 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1006 14:44:41.961811  682995 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1006 14:44:41.978333  682995 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1006 14:44:42.056893  682995 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1006 14:44:42.140645  682995 docker.go:234] disabling docker service ...
	I1006 14:44:42.140713  682995 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1006 14:44:42.159372  682995 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1006 14:44:42.171857  682995 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1006 14:44:42.255908  682995 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1006 14:44:42.340081  682995 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
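The block above stops, disables, and masks docker and cri-docker so that CRI-O is the only runtime left answering the CRI socket; the masking can be confirmed with:

	docker exec ha-481559 systemctl is-enabled docker.socket cri-docker.socket
	# masked
	# masked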
	I1006 14:44:42.352916  682995 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1006 14:44:42.367142  682995 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1006 14:44:42.367215  682995 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:44:42.377866  682995 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1006 14:44:42.377939  682995 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:44:42.387157  682995 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:44:42.395944  682995 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:44:42.404768  682995 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1006 14:44:42.412712  682995 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:44:42.420910  682995 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:44:42.434108  682995 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:44:42.442895  682995 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1006 14:44:42.450289  682995 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1006 14:44:42.457667  682995 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 14:44:42.535385  682995 ssh_runner.go:195] Run: sudo systemctl restart crio
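Taken together, the sed edits above pin the pause image, switch CRI-O to the systemd cgroup manager, and open unprivileged low ports before the restart. The touched keys can be read back with (expected values reconstructed from the commands, shown as comments):

	grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	# pause_image = "registry.k8s.io/pause:3.10.1"
	# cgroup_manager = "systemd"
	# conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",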
	I1006 14:44:42.643348  682995 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1006 14:44:42.643424  682995 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1006 14:44:42.647404  682995 start.go:563] Will wait 60s for crictl version
	I1006 14:44:42.647467  682995 ssh_runner.go:195] Run: which crictl
	I1006 14:44:42.651000  682995 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1006 14:44:42.675962  682995 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1006 14:44:42.676044  682995 ssh_runner.go:195] Run: crio --version
	I1006 14:44:42.705541  682995 ssh_runner.go:195] Run: crio --version
	I1006 14:44:42.736773  682995 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1006 14:44:42.738090  682995 cli_runner.go:164] Run: docker network inspect ha-481559 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1006 14:44:42.754892  682995 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1006 14:44:42.759274  682995 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
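The bash one-liner above rewrites /etc/hosts inside the node so host.minikube.internal resolves to the network gateway:

	docker exec ha-481559 grep host.minikube.internal /etc/hosts
	# 192.168.49.1	host.minikube.internal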
	I1006 14:44:42.770415  682995 kubeadm.go:883] updating cluster {Name:ha-481559 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-481559 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1006 14:44:42.770534  682995 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1006 14:44:42.770581  682995 ssh_runner.go:195] Run: sudo crictl images --output json
	I1006 14:44:42.805187  682995 crio.go:514] all images are preloaded for cri-o runtime.
	I1006 14:44:42.805221  682995 crio.go:433] Images already preloaded, skipping extraction
	I1006 14:44:42.805274  682995 ssh_runner.go:195] Run: sudo crictl images --output json
	I1006 14:44:42.831096  682995 crio.go:514] all images are preloaded for cri-o runtime.
	I1006 14:44:42.831123  682995 cache_images.go:85] Images are preloaded, skipping loading
	I1006 14:44:42.831132  682995 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1006 14:44:42.831244  682995 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-481559 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-481559 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
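Note: the [Unit]/[Service] fragment above is rendered into the systemd drop-in scp'd to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines further down; the empty ExecStart= line clears the base unit's command before the override replaces it. To see how the pieces compose on the node (a sketch, not part of the test run):

  # Print the base kubelet unit together with every drop-in, in merge order.
  systemctl cat kubelet
  # After daemon-reload, confirm the effective command line systemd will run.
  sudo systemctl daemon-reload && systemctl show kubelet -p ExecStart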
	I1006 14:44:42.831321  682995 ssh_runner.go:195] Run: crio config
	I1006 14:44:42.877768  682995 cni.go:84] Creating CNI manager for ""
	I1006 14:44:42.877790  682995 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1006 14:44:42.877819  682995 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1006 14:44:42.877840  682995 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-481559 NodeName:ha-481559 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1006 14:44:42.877966  682995 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-481559"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
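Note: everything from InitConfiguration through KubeProxyConfiguration above is written as one multi-document YAML file to /var/tmp/minikube/kubeadm.yaml.new (see the scp below). If a config like this needs a manual sanity check, recent kubeadm releases (v1.26+) can validate it without touching the node; a sketch:

  # Validate the file against the kubeadm API types, with no side effects.
  sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml
  # Or dry-run init, which additionally exercises the preflight checks.
  sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run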
	
	I1006 14:44:42.877993  682995 kube-vip.go:115] generating kube-vip config ...
	I1006 14:44:42.878035  682995 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1006 14:44:42.890886  682995 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
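Note: the empty stdout/stderr above simply mean `lsmod | grep ip_vs` matched nothing, so kube-vip falls back from IPVS-based control-plane load balancing. With the Docker driver the container shares the host kernel, so the modules would have to be loaded on the host; a hypothetical remediation sketch:

  # Check whether the IPVS modules are present in the running kernel.
  lsmod | grep ip_vs
  # Load them if the kernel ships them as modules (-a loads every listed name).
  sudo modprobe -a ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh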
	I1006 14:44:42.890995  682995 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
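Note: this manifest is written to /etc/kubernetes/manifests/kube-vip.yaml below, so the kubelet runs kube-vip as a static pod that advertises the HA virtual IP 192.168.49.254 on eth0 and load-balances the control plane. One way to confirm it actually started, sketched with the same CRI socket the log uses:

  # Static pod manifests live where the kubelet's staticPodPath points.
  ls -l /etc/kubernetes/manifests/kube-vip.yaml
  # List the kube-vip container (including exited attempts) through CRI-O.
  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a --name kube-vip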
	I1006 14:44:42.891046  682995 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1006 14:44:42.899063  682995 binaries.go:44] Found k8s binaries, skipping transfer
	I1006 14:44:42.899132  682995 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1006 14:44:42.906926  682995 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1006 14:44:42.919358  682995 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1006 14:44:42.934141  682995 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1006 14:44:42.945961  682995 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I1006 14:44:42.959489  682995 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1006 14:44:42.962953  682995 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1006 14:44:42.972760  682995 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 14:44:43.053996  682995 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1006 14:44:43.077665  682995 certs.go:69] Setting up /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559 for IP: 192.168.49.2
	I1006 14:44:43.077692  682995 certs.go:195] generating shared ca certs ...
	I1006 14:44:43.077714  682995 certs.go:227] acquiring lock for ca certs: {Name:mka0cc25cb6a953e937aa825fc55167759271aaf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:44:43.077856  682995 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21701-626179/.minikube/ca.key
	I1006 14:44:43.077899  682995 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21701-626179/.minikube/proxy-client-ca.key
	I1006 14:44:43.077909  682995 certs.go:257] generating profile certs ...
	I1006 14:44:43.077963  682995 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/client.key
	I1006 14:44:43.077983  682995 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/client.crt with IP's: []
	I1006 14:44:43.259387  682995 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/client.crt ...
	I1006 14:44:43.259418  682995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/client.crt: {Name:mk058803c7a7f0f2aa3fb547a3aafbba9518c3f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:44:43.259607  682995 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/client.key ...
	I1006 14:44:43.259619  682995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/client.key: {Name:mk0ae3492597f7c1edf0d7262770452fa244a40b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:44:43.265151  682995 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.key.6031b710
	I1006 14:44:43.265175  682995 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.crt.6031b710 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I1006 14:44:43.807062  682995 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.crt.6031b710 ...
	I1006 14:44:43.807095  682995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.crt.6031b710: {Name:mk30dd14f07a4b732bb60853cc2fd5f84f73e2f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:44:43.807283  682995 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.key.6031b710 ...
	I1006 14:44:43.807298  682995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.key.6031b710: {Name:mkf3f5fbdf7957143c03cb611320a2e02acb94c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:44:43.807374  682995 certs.go:382] copying /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.crt.6031b710 -> /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.crt
	I1006 14:44:43.807489  682995 certs.go:386] copying /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.key.6031b710 -> /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.key
	I1006 14:44:43.807558  682995 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.key
	I1006 14:44:43.807574  682995 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.crt with IP's: []
	I1006 14:44:43.994115  682995 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.crt ...
	I1006 14:44:43.994149  682995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.crt: {Name:mk715c6902e25626016d7eb8fdb7b52f0fdce895 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:44:43.994338  682995 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.key ...
	I1006 14:44:43.994350  682995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.key: {Name:mka438ddf42b96ca34511dda1ce60f08f1d48b59 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
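Note: the apiserver certificate generated above must carry every address a client might dial, hence the SAN list [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254] covering the service IP, loopback, node IP, and the HA VIP. Checking the SANs of such a cert by hand (a sketch; -ext needs OpenSSL 1.1.1+):

  # Print only the Subject Alternative Name extension of the apiserver cert.
  openssl x509 -noout -ext subjectAltName \
    -in /var/lib/minikube/certs/apiserver.crt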
	I1006 14:44:43.994429  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1006 14:44:43.994449  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1006 14:44:43.994460  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1006 14:44:43.994470  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1006 14:44:43.994480  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1006 14:44:43.994490  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1006 14:44:43.994510  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1006 14:44:43.994522  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1006 14:44:43.994570  682995 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/629719.pem (1338 bytes)
	W1006 14:44:43.994617  682995 certs.go:480] ignoring /home/jenkins/minikube-integration/21701-626179/.minikube/certs/629719_empty.pem, impossibly tiny 0 bytes
	I1006 14:44:43.994630  682995 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca-key.pem (1675 bytes)
	I1006 14:44:43.994653  682995 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem (1082 bytes)
	I1006 14:44:43.994674  682995 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/cert.pem (1123 bytes)
	I1006 14:44:43.994701  682995 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/key.pem (1679 bytes)
	I1006 14:44:43.994739  682995 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem (1708 bytes)
	I1006 14:44:43.994772  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem -> /usr/share/ca-certificates/6297192.pem
	I1006 14:44:43.994786  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1006 14:44:43.994798  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/629719.pem -> /usr/share/ca-certificates/629719.pem
	I1006 14:44:43.995423  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1006 14:44:44.014422  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1006 14:44:44.032422  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1006 14:44:44.050727  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1006 14:44:44.068490  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1006 14:44:44.085540  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1006 14:44:44.102941  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1006 14:44:44.121043  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1006 14:44:44.139583  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem --> /usr/share/ca-certificates/6297192.pem (1708 bytes)
	I1006 14:44:44.159654  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1006 14:44:44.176939  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/certs/629719.pem --> /usr/share/ca-certificates/629719.pem (1338 bytes)
	I1006 14:44:44.194332  682995 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1006 14:44:44.207641  682995 ssh_runner.go:195] Run: openssl version
	I1006 14:44:44.214349  682995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6297192.pem && ln -fs /usr/share/ca-certificates/6297192.pem /etc/ssl/certs/6297192.pem"
	I1006 14:44:44.223426  682995 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6297192.pem
	I1006 14:44:44.227339  682995 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  6 14:13 /usr/share/ca-certificates/6297192.pem
	I1006 14:44:44.227401  682995 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6297192.pem
	I1006 14:44:44.261578  682995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6297192.pem /etc/ssl/certs/3ec20f2e.0"
	I1006 14:44:44.270472  682995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1006 14:44:44.279083  682995 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1006 14:44:44.282749  682995 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  6 13:56 /usr/share/ca-certificates/minikubeCA.pem
	I1006 14:44:44.282813  682995 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1006 14:44:44.316484  682995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1006 14:44:44.325228  682995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/629719.pem && ln -fs /usr/share/ca-certificates/629719.pem /etc/ssl/certs/629719.pem"
	I1006 14:44:44.334098  682995 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/629719.pem
	I1006 14:44:44.337988  682995 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  6 14:13 /usr/share/ca-certificates/629719.pem
	I1006 14:44:44.338051  682995 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/629719.pem
	I1006 14:44:44.371914  682995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/629719.pem /etc/ssl/certs/51391683.0"
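Note: the link names above (3ec20f2e.0, b5213941.0, 51391683.0) are OpenSSL subject hashes; OpenSSL resolves CAs in /etc/ssl/certs by hashed filename, which is exactly what the `openssl x509 -hash` calls compute. The pattern for an arbitrary CA file (sketch; myca.pem is a placeholder path):

  # Compute the subject hash OpenSSL uses for directory lookups...
  HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/myca.pem)
  # ...and link the CA under "<hash>.0" so verification can find it.
  sudo ln -fs /usr/share/ca-certificates/myca.pem "/etc/ssl/certs/${HASH}.0"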
	I1006 14:44:44.380847  682995 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1006 14:44:44.384643  682995 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1006 14:44:44.384694  682995 kubeadm.go:400] StartCluster: {Name:ha-481559 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-481559 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 14:44:44.384758  682995 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1006 14:44:44.384823  682995 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1006 14:44:44.413083  682995 cri.go:89] found id: ""
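Note: the empty result above comes from filtering by CRI label rather than grepping names; minikube asks the runtime for containers whose pod namespace is kube-system, and on a fresh node there are none yet. The same query by hand:

  # List container IDs whose pod lives in kube-system, via the CRI label.
  sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system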
	I1006 14:44:44.413145  682995 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1006 14:44:44.421446  682995 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1006 14:44:44.429380  682995 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1006 14:44:44.429431  682995 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1006 14:44:44.437643  682995 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1006 14:44:44.437667  682995 kubeadm.go:157] found existing configuration files:
	
	I1006 14:44:44.437726  682995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1006 14:44:44.445948  682995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1006 14:44:44.446021  682995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1006 14:44:44.453451  682995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1006 14:44:44.460986  682995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1006 14:44:44.461064  682995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1006 14:44:44.468259  682995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1006 14:44:44.475830  682995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1006 14:44:44.475882  682995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1006 14:44:44.483080  682995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1006 14:44:44.490569  682995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1006 14:44:44.490632  682995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1006 14:44:44.498056  682995 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1006 14:44:44.560210  682995 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1006 14:44:44.618315  682995 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1006 14:48:49.762009  682995 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1006 14:48:49.762136  682995 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1006 14:48:49.765019  682995 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1006 14:48:49.765065  682995 kubeadm.go:318] [preflight] Running pre-flight checks
	I1006 14:48:49.765142  682995 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1006 14:48:49.765192  682995 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1006 14:48:49.765263  682995 kubeadm.go:318] OS: Linux
	I1006 14:48:49.765329  682995 kubeadm.go:318] CGROUPS_CPU: enabled
	I1006 14:48:49.765384  682995 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1006 14:48:49.765424  682995 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1006 14:48:49.765465  682995 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1006 14:48:49.765507  682995 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1006 14:48:49.765557  682995 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1006 14:48:49.765644  682995 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1006 14:48:49.765713  682995 kubeadm.go:318] CGROUPS_IO: enabled
	I1006 14:48:49.765816  682995 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1006 14:48:49.765897  682995 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1006 14:48:49.765974  682995 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1006 14:48:49.766033  682995 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1006 14:48:49.768189  682995 out.go:252]   - Generating certificates and keys ...
	I1006 14:48:49.768304  682995 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1006 14:48:49.768391  682995 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1006 14:48:49.768495  682995 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1006 14:48:49.768546  682995 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1006 14:48:49.768600  682995 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1006 14:48:49.768641  682995 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1006 14:48:49.768684  682995 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1006 14:48:49.768778  682995 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [ha-481559 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1006 14:48:49.768847  682995 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1006 14:48:49.768982  682995 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [ha-481559 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1006 14:48:49.769042  682995 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1006 14:48:49.769108  682995 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1006 14:48:49.769166  682995 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1006 14:48:49.769263  682995 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1006 14:48:49.769339  682995 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1006 14:48:49.769416  682995 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1006 14:48:49.769489  682995 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1006 14:48:49.769549  682995 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1006 14:48:49.769601  682995 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1006 14:48:49.769671  682995 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1006 14:48:49.769753  682995 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1006 14:48:49.771489  682995 out.go:252]   - Booting up control plane ...
	I1006 14:48:49.771577  682995 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1006 14:48:49.771664  682995 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1006 14:48:49.771742  682995 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1006 14:48:49.771858  682995 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1006 14:48:49.771974  682995 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1006 14:48:49.772108  682995 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1006 14:48:49.772220  682995 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1006 14:48:49.772288  682995 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1006 14:48:49.772413  682995 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1006 14:48:49.772556  682995 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1006 14:48:49.772647  682995 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.501252368s
	I1006 14:48:49.772772  682995 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1006 14:48:49.772891  682995 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1006 14:48:49.772971  682995 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1006 14:48:49.773033  682995 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1006 14:48:49.773108  682995 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.001319326s
	I1006 14:48:49.773189  682995 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001358761s
	I1006 14:48:49.773304  682995 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.001281021s
	I1006 14:48:49.773319  682995 kubeadm.go:318] 
	I1006 14:48:49.773407  682995 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1006 14:48:49.773472  682995 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1006 14:48:49.773545  682995 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1006 14:48:49.773657  682995 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1006 14:48:49.773771  682995 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1006 14:48:49.773850  682995 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1006 14:48:49.773891  682995 kubeadm.go:318] 
	W1006 14:48:49.774048  682995 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ha-481559 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ha-481559 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.501252368s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.001319326s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001358761s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001281021s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
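Note: the kubeadm output above already names the triage path; under CRI-O it boils down to the commands below, plus the kubelet journal, which is usually where a crash-looping or never-started apiserver shows up first (socket path taken from this run):

  # List every Kubernetes container, including exited ones.
  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
  # Inspect the logs of a failing container by its ID.
  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID
  # The kubelet journal records why static pods fail to start at all.
  sudo journalctl -u kubelet -n 200 --no-pager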
	
	I1006 14:48:49.774147  682995 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1006 14:48:52.524900  682995 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.75072398s)
	I1006 14:48:52.524985  682995 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1006 14:48:52.538104  682995 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1006 14:48:52.538173  682995 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1006 14:48:52.546610  682995 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1006 14:48:52.546639  682995 kubeadm.go:157] found existing configuration files:
	
	I1006 14:48:52.546692  682995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1006 14:48:52.555271  682995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1006 14:48:52.555334  682995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1006 14:48:52.564502  682995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1006 14:48:52.572861  682995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1006 14:48:52.572925  682995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1006 14:48:52.580681  682995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1006 14:48:52.588574  682995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1006 14:48:52.588636  682995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1006 14:48:52.596314  682995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1006 14:48:52.604007  682995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1006 14:48:52.604073  682995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1006 14:48:52.611967  682995 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1006 14:48:52.650794  682995 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1006 14:48:52.650844  682995 kubeadm.go:318] [preflight] Running pre-flight checks
	I1006 14:48:52.671446  682995 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1006 14:48:52.671559  682995 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1006 14:48:52.671628  682995 kubeadm.go:318] OS: Linux
	I1006 14:48:52.671718  682995 kubeadm.go:318] CGROUPS_CPU: enabled
	I1006 14:48:52.671766  682995 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1006 14:48:52.671811  682995 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1006 14:48:52.671850  682995 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1006 14:48:52.671890  682995 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1006 14:48:52.671928  682995 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1006 14:48:52.671972  682995 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1006 14:48:52.672010  682995 kubeadm.go:318] CGROUPS_IO: enabled
	I1006 14:48:52.732758  682995 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1006 14:48:52.732876  682995 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1006 14:48:52.732979  682995 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1006 14:48:52.739914  682995 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1006 14:48:52.743428  682995 out.go:252]   - Generating certificates and keys ...
	I1006 14:48:52.743535  682995 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1006 14:48:52.743654  682995 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1006 14:48:52.743727  682995 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1006 14:48:52.743777  682995 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1006 14:48:52.743861  682995 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1006 14:48:52.743911  682995 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1006 14:48:52.743985  682995 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1006 14:48:52.744055  682995 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1006 14:48:52.744143  682995 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1006 14:48:52.744228  682995 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1006 14:48:52.744266  682995 kubeadm.go:318] [certs] Using the existing "sa" key
	I1006 14:48:52.744323  682995 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1006 14:48:53.107297  682995 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1006 14:48:53.300701  682995 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1006 14:48:53.503166  682995 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1006 14:48:53.664024  682995 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1006 14:48:53.725865  682995 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1006 14:48:53.726293  682995 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1006 14:48:53.728797  682995 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1006 14:48:53.730586  682995 out.go:252]   - Booting up control plane ...
	I1006 14:48:53.730720  682995 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1006 14:48:53.730830  682995 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1006 14:48:53.730903  682995 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1006 14:48:53.744534  682995 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1006 14:48:53.744672  682995 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1006 14:48:53.752267  682995 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1006 14:48:53.752422  682995 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1006 14:48:53.752505  682995 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1006 14:48:53.852049  682995 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1006 14:48:53.852226  682995 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1006 14:48:54.353729  682995 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 501.825241ms
	I1006 14:48:54.356542  682995 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1006 14:48:54.356619  682995 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1006 14:48:54.356695  682995 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1006 14:48:54.356819  682995 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1006 14:52:54.358331  682995 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.001082251s
	I1006 14:52:54.358653  682995 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001136686s
	I1006 14:52:54.358853  682995 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.001070627s
	I1006 14:52:54.358881  682995 kubeadm.go:318] 
	I1006 14:52:54.359059  682995 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1006 14:52:54.359298  682995 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1006 14:52:54.359539  682995 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1006 14:52:54.359760  682995 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1006 14:52:54.359952  682995 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1006 14:52:54.360116  682995 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1006 14:52:54.360148  682995 kubeadm.go:318] 
	I1006 14:52:54.363033  682995 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1006 14:52:54.363163  682995 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1006 14:52:54.363696  682995 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1006 14:52:54.363761  682995 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1006 14:52:54.363858  682995 kubeadm.go:402] duration metric: took 8m9.979166519s to StartCluster
	I1006 14:52:54.363946  682995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:52:54.364031  682995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:52:54.392579  682995 cri.go:89] found id: ""
	I1006 14:52:54.392622  682995 logs.go:282] 0 containers: []
	W1006 14:52:54.392631  682995 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:52:54.392638  682995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:52:54.392693  682995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:52:54.420188  682995 cri.go:89] found id: ""
	I1006 14:52:54.420226  682995 logs.go:282] 0 containers: []
	W1006 14:52:54.420237  682995 logs.go:284] No container was found matching "etcd"
	I1006 14:52:54.420245  682995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:52:54.420299  682995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:52:54.445694  682995 cri.go:89] found id: ""
	I1006 14:52:54.445723  682995 logs.go:282] 0 containers: []
	W1006 14:52:54.445733  682995 logs.go:284] No container was found matching "coredns"
	I1006 14:52:54.445740  682995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:52:54.445791  682995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:52:54.471923  682995 cri.go:89] found id: ""
	I1006 14:52:54.471954  682995 logs.go:282] 0 containers: []
	W1006 14:52:54.471962  682995 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:52:54.471971  682995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:52:54.472030  682995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:52:54.498805  682995 cri.go:89] found id: ""
	I1006 14:52:54.498836  682995 logs.go:282] 0 containers: []
	W1006 14:52:54.498848  682995 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:52:54.498857  682995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:52:54.498922  682995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:52:54.524613  682995 cri.go:89] found id: ""
	I1006 14:52:54.524638  682995 logs.go:282] 0 containers: []
	W1006 14:52:54.524646  682995 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:52:54.524652  682995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:52:54.524708  682995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:52:54.551140  682995 cri.go:89] found id: ""
	I1006 14:52:54.551170  682995 logs.go:282] 0 containers: []
	W1006 14:52:54.551181  682995 logs.go:284] No container was found matching "kindnet"
	I1006 14:52:54.551194  682995 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:52:54.551220  682995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:52:54.615573  682995 logs.go:123] Gathering logs for container status ...
	I1006 14:52:54.615607  682995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:52:54.645703  682995 logs.go:123] Gathering logs for kubelet ...
	I1006 14:52:54.645732  682995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:52:54.709506  682995 logs.go:123] Gathering logs for dmesg ...
	I1006 14:52:54.709543  682995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:52:54.722963  682995 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:52:54.722997  682995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:52:54.783016  682995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:52:54.774940    2611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 14:52:54.776283    2611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 14:52:54.777585    2611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 14:52:54.778053    2611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 14:52:54.779590    2611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	(same connection-refused errors as the stderr above; duplicate elided)
	
	** /stderr **
	W1006 14:52:54.783054  682995 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.825241ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.001082251s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001136686s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001070627s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1006 14:52:54.783107  682995 out.go:285] * 
	W1006 14:52:54.783182  682995 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout/stderr: identical to the kubeadm init output above (duplicate elided)
	
	W1006 14:52:54.783200  682995 out.go:285] * 
	W1006 14:52:54.785658  682995 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1006 14:52:54.789273  682995 out.go:203] 
	W1006 14:52:54.790573  682995 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout/stderr: identical to the kubeadm init output above (duplicate elided)
	
	W1006 14:52:54.790604  682995 out.go:285] * 
	I1006 14:52:54.791821  682995 out.go:203] 
	
	
	==> CRI-O <==
	Oct 06 14:54:18 ha-481559 crio[777]: time="2025-10-06T14:54:18.248470804Z" level=info msg="createCtr: removing container fa77408e15584543e6120d9368eb14b8934e9db4687dc2fbecb8a799e7bc2c8a" id=2cc177f4-c68b-41bb-854c-9b932542a4d9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:54:18 ha-481559 crio[777]: time="2025-10-06T14:54:18.248499912Z" level=info msg="createCtr: deleting container fa77408e15584543e6120d9368eb14b8934e9db4687dc2fbecb8a799e7bc2c8a from storage" id=2cc177f4-c68b-41bb-854c-9b932542a4d9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:54:18 ha-481559 crio[777]: time="2025-10-06T14:54:18.250376963Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-ha-481559_kube-system_cc93cb8d89afaa943672c70952b45174_0" id=2cc177f4-c68b-41bb-854c-9b932542a4d9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:54:27 ha-481559 crio[777]: time="2025-10-06T14:54:27.221669418Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=d74faff1-6f9c-4b14-8742-e1b6f37f0a5e name=/runtime.v1.ImageService/ImageStatus
	Oct 06 14:54:27 ha-481559 crio[777]: time="2025-10-06T14:54:27.222583794Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=bf74765d-5b49-40b2-947c-7b7de3af1367 name=/runtime.v1.ImageService/ImageStatus
	Oct 06 14:54:27 ha-481559 crio[777]: time="2025-10-06T14:54:27.223547801Z" level=info msg="Creating container: kube-system/kube-apiserver-ha-481559/kube-apiserver" id=ea7eba65-7901-4b0a-8dc1-512a8ae40c08 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:54:27 ha-481559 crio[777]: time="2025-10-06T14:54:27.223752683Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 14:54:27 ha-481559 crio[777]: time="2025-10-06T14:54:27.227430642Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 14:54:27 ha-481559 crio[777]: time="2025-10-06T14:54:27.227880877Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 14:54:27 ha-481559 crio[777]: time="2025-10-06T14:54:27.248638243Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=ea7eba65-7901-4b0a-8dc1-512a8ae40c08 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:54:27 ha-481559 crio[777]: time="2025-10-06T14:54:27.250102327Z" level=info msg="createCtr: deleting container ID 44bd9526540da0c835e3df8165ad6d393d56f73b366013b65c8d81c87c72a71c from idIndex" id=ea7eba65-7901-4b0a-8dc1-512a8ae40c08 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:54:27 ha-481559 crio[777]: time="2025-10-06T14:54:27.250142803Z" level=info msg="createCtr: removing container 44bd9526540da0c835e3df8165ad6d393d56f73b366013b65c8d81c87c72a71c" id=ea7eba65-7901-4b0a-8dc1-512a8ae40c08 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:54:27 ha-481559 crio[777]: time="2025-10-06T14:54:27.250176316Z" level=info msg="createCtr: deleting container 44bd9526540da0c835e3df8165ad6d393d56f73b366013b65c8d81c87c72a71c from storage" id=ea7eba65-7901-4b0a-8dc1-512a8ae40c08 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:54:27 ha-481559 crio[777]: time="2025-10-06T14:54:27.252283198Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-ha-481559_kube-system_b4e1cca8a09d3789a7e0862428dfe0db_0" id=ea7eba65-7901-4b0a-8dc1-512a8ae40c08 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:54:28 ha-481559 crio[777]: time="2025-10-06T14:54:28.221551253Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=17d78e94-e04d-498b-bc45-5d0791b6c8a5 name=/runtime.v1.ImageService/ImageStatus
	Oct 06 14:54:28 ha-481559 crio[777]: time="2025-10-06T14:54:28.222818644Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=47a2165f-a6f2-492d-9d1e-29dca5bcd8d0 name=/runtime.v1.ImageService/ImageStatus
	Oct 06 14:54:28 ha-481559 crio[777]: time="2025-10-06T14:54:28.223780089Z" level=info msg="Creating container: kube-system/etcd-ha-481559/etcd" id=ecbc1c2a-3ed9-4452-81c8-a6b6b312f34f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:54:28 ha-481559 crio[777]: time="2025-10-06T14:54:28.224008001Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 14:54:28 ha-481559 crio[777]: time="2025-10-06T14:54:28.227534186Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 14:54:28 ha-481559 crio[777]: time="2025-10-06T14:54:28.227929945Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 14:54:28 ha-481559 crio[777]: time="2025-10-06T14:54:28.244953858Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=ecbc1c2a-3ed9-4452-81c8-a6b6b312f34f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:54:28 ha-481559 crio[777]: time="2025-10-06T14:54:28.246382165Z" level=info msg="createCtr: deleting container ID c1376676dafaf7b4d10a72a589a3ae2d56ecf790744e031ae536ebf8175e4485 from idIndex" id=ecbc1c2a-3ed9-4452-81c8-a6b6b312f34f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:54:28 ha-481559 crio[777]: time="2025-10-06T14:54:28.246426068Z" level=info msg="createCtr: removing container c1376676dafaf7b4d10a72a589a3ae2d56ecf790744e031ae536ebf8175e4485" id=ecbc1c2a-3ed9-4452-81c8-a6b6b312f34f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:54:28 ha-481559 crio[777]: time="2025-10-06T14:54:28.246465998Z" level=info msg="createCtr: deleting container c1376676dafaf7b4d10a72a589a3ae2d56ecf790744e031ae536ebf8175e4485 from storage" id=ecbc1c2a-3ed9-4452-81c8-a6b6b312f34f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:54:28 ha-481559 crio[777]: time="2025-10-06T14:54:28.249858456Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-ha-481559_kube-system_520c6060936b1c2aac479c99ed6c0355_0" id=ecbc1c2a-3ed9-4452-81c8-a6b6b312f34f name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:54:28.795739    3077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 14:54:28.796296    3077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 14:54:28.797886    3077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 14:54:28.798329    3077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 14:54:28.799842    3077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	
	
	==> kernel <==
	 14:54:28 up  5:36,  0 user,  load average: 0.02, 0.04, 0.14
	Linux ha-481559 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 06 14:54:18 ha-481559 kubelet[1985]:         container kube-scheduler start failed in pod kube-scheduler-ha-481559_kube-system(cc93cb8d89afaa943672c70952b45174): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 06 14:54:18 ha-481559 kubelet[1985]:  > logger="UnhandledError"
	Oct 06 14:54:18 ha-481559 kubelet[1985]: E1006 14:54:18.250822    1985 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-ha-481559" podUID="cc93cb8d89afaa943672c70952b45174"
	Oct 06 14:54:18 ha-481559 kubelet[1985]: E1006 14:54:18.620892    1985 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.49.2:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-481559&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	Oct 06 14:54:19 ha-481559 kubelet[1985]: E1006 14:54:19.037620    1985 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8443: connect: connection refused" event="&Event{ObjectMeta:{ha-481559.186bee56630f6256  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-481559,UID:ha-481559,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ha-481559 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ha-481559,},FirstTimestamp:2025-10-06 14:48:54.214861398 +0000 UTC m=+0.361990569,LastTimestamp:2025-10-06 14:48:54.214861398 +0000 UTC m=+0.361990569,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-481559,}"
	Oct 06 14:54:21 ha-481559 kubelet[1985]: E1006 14:54:21.860396    1985 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-481559?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 06 14:54:22 ha-481559 kubelet[1985]: I1006 14:54:22.037492    1985 kubelet_node_status.go:75] "Attempting to register node" node="ha-481559"
	Oct 06 14:54:22 ha-481559 kubelet[1985]: E1006 14:54:22.037882    1985 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-481559"
	Oct 06 14:54:24 ha-481559 kubelet[1985]: E1006 14:54:24.244954    1985 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-481559\" not found"
	Oct 06 14:54:27 ha-481559 kubelet[1985]: E1006 14:54:27.221162    1985 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-481559\" not found" node="ha-481559"
	Oct 06 14:54:27 ha-481559 kubelet[1985]: E1006 14:54:27.252615    1985 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 06 14:54:27 ha-481559 kubelet[1985]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 06 14:54:27 ha-481559 kubelet[1985]:  > podSandboxID="cadd804367d6dcdba2fb49fe06e3c1db8b35e6ee5c505328925ae346d4cdb867"
	Oct 06 14:54:27 ha-481559 kubelet[1985]: E1006 14:54:27.252723    1985 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 06 14:54:27 ha-481559 kubelet[1985]:         container kube-apiserver start failed in pod kube-apiserver-ha-481559_kube-system(b4e1cca8a09d3789a7e0862428dfe0db): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 06 14:54:27 ha-481559 kubelet[1985]:  > logger="UnhandledError"
	Oct 06 14:54:27 ha-481559 kubelet[1985]: E1006 14:54:27.252753    1985 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-ha-481559" podUID="b4e1cca8a09d3789a7e0862428dfe0db"
	Oct 06 14:54:28 ha-481559 kubelet[1985]: E1006 14:54:28.221034    1985 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-481559\" not found" node="ha-481559"
	Oct 06 14:54:28 ha-481559 kubelet[1985]: E1006 14:54:28.250166    1985 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 06 14:54:28 ha-481559 kubelet[1985]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 06 14:54:28 ha-481559 kubelet[1985]:  > podSandboxID="a7ce34bebe17bc556bee492a72e0243ebe86fdfcd40a6e28aafa4e286d225bc6"
	Oct 06 14:54:28 ha-481559 kubelet[1985]: E1006 14:54:28.250298    1985 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 06 14:54:28 ha-481559 kubelet[1985]:         container etcd start failed in pod etcd-ha-481559_kube-system(520c6060936b1c2aac479c99ed6c0355): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 06 14:54:28 ha-481559 kubelet[1985]:  > logger="UnhandledError"
	Oct 06 14:54:28 ha-481559 kubelet[1985]: E1006 14:54:28.250344    1985 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-ha-481559" podUID="520c6060936b1c2aac479c99ed6c0355"
	

                                                
                                                
-- /stdout --
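
Every control-plane pod in the log above dies at create time with the same CRI-O error, "cannot open sd-bus: No such file or directory", which typically indicates the runtime was asked to use the systemd cgroup manager in an environment where no systemd D-Bus socket is reachable; kubeadm's livez/healthz checks then time out because no apiserver, scheduler, or controller-manager container ever starts. A minimal triage sketch assembled from the exact commands this log already runs (CRI-O socket path and journal units as shown above; adjust them if your runtime endpoint differs):

	#!/usr/bin/env bash
	SOCK=unix:///var/run/crio/crio.sock

	# List all Kubernetes containers, including exited ones, as kubeadm suggests.
	sudo crictl --runtime-endpoint "$SOCK" ps -a | grep kube | grep -v pause

	# If a container exists but crashed, read its logs (CONTAINERID is a placeholder).
	# sudo crictl --runtime-endpoint "$SOCK" logs CONTAINERID

	# Here creation itself fails, so there is no container left to inspect; fall
	# back to the runtime and kubelet journals, as the log gatherer above does.
	sudo journalctl -u crio -n 400 --no-pager | grep -i 'createCtr\|sd-bus'
	sudo journalctl -u kubelet -n 400 --no-pager | tail -n 50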
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-481559 -n ha-481559
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-481559 -n ha-481559: exit status 6 (297.668244ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1006 14:54:29.169698  689841 status.go:458] kubeconfig endpoint: get endpoint: "ha-481559" does not appear in /home/jenkins/minikube-integration/21701-626179/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "ha-481559" apiserver is not running, skipping kubectl commands (state="Stopped")
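
The status probe fails with exit status 6 because the "ha-481559" entry is missing from the kubeconfig, not because the host container is down. A hedged repair sequence using the command the warning in the stdout above recommends (profile and binary paths as used throughout this report; with the apiserver still down, kubectl calls will keep failing even after the context is restored):

	# Confirm the profile's context is absent from the active kubeconfig.
	kubectl config get-contexts

	# Rewrite the endpoint and context for this profile, as the warning suggests.
	out/minikube-linux-amd64 -p ha-481559 update-context

	# Point kubectl back at the profile once the entry exists.
	kubectl config use-context ha-481559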
--- FAIL: TestMultiControlPlane/serial/DeployApp (93.08s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.36s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-481559 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-481559 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (97.844398ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-481559"

                                                
                                                
** /stderr **
ha_test.go:201: failed to get Pod names: exit status 1
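
The failing command flattens pod names into a single line with kubectl's JSONPath output; with no server entry for "ha-481559" it cannot even connect. Against a healthy cluster the same query feeds the exec/nslookup probes recorded in the Audit table below, roughly like this (an illustrative loop with hypothetical pod names, not the test's actual code):

	# Prints space-separated names on one line, e.g. "busybox-abc busybox-def".
	kubectl get pods -o jsonpath='{.items[*].metadata.name}'

	# The DNS checks then exec into each pod, as the Audit entries show.
	for pod in $(kubectl get pods -o jsonpath='{.items[*].metadata.name}'); do
	  kubectl exec "$pod" -- nslookup kubernetes.default.svc.cluster.local
	done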
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/PingHostFromPods]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/PingHostFromPods]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-481559
helpers_test.go:243: (dbg) docker inspect ha-481559:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "8b017d29b6b188c11217460aa328e959f3cceef4aaac68c0efbe9c3f356f27b0",
	        "Created": "2025-10-06T14:44:39.623616791Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 683567,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-06T14:44:39.660699919Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/8b017d29b6b188c11217460aa328e959f3cceef4aaac68c0efbe9c3f356f27b0/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8b017d29b6b188c11217460aa328e959f3cceef4aaac68c0efbe9c3f356f27b0/hostname",
	        "HostsPath": "/var/lib/docker/containers/8b017d29b6b188c11217460aa328e959f3cceef4aaac68c0efbe9c3f356f27b0/hosts",
	        "LogPath": "/var/lib/docker/containers/8b017d29b6b188c11217460aa328e959f3cceef4aaac68c0efbe9c3f356f27b0/8b017d29b6b188c11217460aa328e959f3cceef4aaac68c0efbe9c3f356f27b0-json.log",
	        "Name": "/ha-481559",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-481559:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-481559",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "8b017d29b6b188c11217460aa328e959f3cceef4aaac68c0efbe9c3f356f27b0",
	                "LowerDir": "/var/lib/docker/overlay2/764c4d13032cea1c981a1513641378a4e309e5bc01b5356c6fecba1c3e0e2311-init/diff:/var/lib/docker/overlay2/498c39ad2e273bbda04a4b230222b9767ea2da097b1fe98436168d26143cd080/diff",
	                "MergedDir": "/var/lib/docker/overlay2/764c4d13032cea1c981a1513641378a4e309e5bc01b5356c6fecba1c3e0e2311/merged",
	                "UpperDir": "/var/lib/docker/overlay2/764c4d13032cea1c981a1513641378a4e309e5bc01b5356c6fecba1c3e0e2311/diff",
	                "WorkDir": "/var/lib/docker/overlay2/764c4d13032cea1c981a1513641378a4e309e5bc01b5356c6fecba1c3e0e2311/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-481559",
	                "Source": "/var/lib/docker/volumes/ha-481559/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-481559",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-481559",
	                "name.minikube.sigs.k8s.io": "ha-481559",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7effae92997970d320561b0b86c210815b18a55d65bd555e1bff50158ed38adc",
	            "SandboxKey": "/var/run/docker/netns/7effae929979",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32883"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32884"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32887"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32885"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32886"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-481559": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "42:f3:45:3f:5b:fc",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "be549c6a1ae4457d4629d9a7f86cde88021333ee0af8bb7a740b008115c43dde",
	                    "EndpointID": "b8540561692606ad815fcdb4502c1e36a16141413d3697f4cf48668502930e4c",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-481559",
	                        "8b017d29b6b1"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
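
The inspect output is long, but the detail that matters for the kubeconfig failure is the port map under NetworkSettings: the apiserver's 8443/tcp is published on 127.0.0.1:32886, while in-network checks use 192.168.49.2:8443. Two equivalent ways to read just that mapping (container name from this report; assumes the container is still running):

	# Ask Docker for the published binding directly.
	docker port ha-481559 8443/tcp

	# Or pull the same field out of the inspect JSON with a Go template.
	docker inspect -f '{{ (index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort }}' ha-481559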
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-481559 -n ha-481559
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-481559 -n ha-481559: exit status 6 (295.423654ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1006 14:54:29.580896  689983 status.go:458] kubeconfig endpoint: get endpoint: "ha-481559" does not appear in /home/jenkins/minikube-integration/21701-626179/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
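
Both post-mortem probes select one field from minikube's status output with a Go template: {{.Host}} reports "Running" here even though the earlier {{.APIServer}} probe reported "Stopped", which is consistent with a live container whose control plane never came up. The same pattern, with the two field names this report exercises:

	out/minikube-linux-amd64 status --format='{{.Host}}' -p ha-481559       # Running
	out/minikube-linux-amd64 status --format='{{.APIServer}}' -p ha-481559  # Stopped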
helpers_test.go:252: <<< TestMultiControlPlane/serial/PingHostFromPods FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/PingHostFromPods]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-481559 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/PingHostFromPods logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                      ARGS                                                       │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-135520 image ls --format table --alsologtostderr                                                     │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │ 06 Oct 25 14:40 UTC │
	│ image   │ functional-135520 image ls --format yaml --alsologtostderr                                                      │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │ 06 Oct 25 14:40 UTC │
	│ ssh     │ functional-135520 ssh pgrep buildkitd                                                                           │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │                     │
	│ image   │ functional-135520 image build -t localhost/my-image:functional-135520 testdata/build --alsologtostderr          │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │ 06 Oct 25 14:40 UTC │
	│ image   │ functional-135520 image ls                                                                                      │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │ 06 Oct 25 14:40 UTC │
	│ delete  │ -p functional-135520                                                                                            │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:44 UTC │ 06 Oct 25 14:44 UTC │
	│ start   │ ha-481559 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:44 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml                                                │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:52 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- rollout status deployment/busybox                                                          │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:52 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:52 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:52 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:52 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:53 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:53 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:53 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:53 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:53 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:53 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:54 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:54 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:54 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- exec  -- nslookup kubernetes.io                                                            │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:54 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- exec  -- nslookup kubernetes.default                                                       │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:54 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local                                     │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:54 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:54 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/06 14:44:34
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1006 14:44:34.230587  682995 out.go:360] Setting OutFile to fd 1 ...
	I1006 14:44:34.230719  682995 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 14:44:34.230728  682995 out.go:374] Setting ErrFile to fd 2...
	I1006 14:44:34.230733  682995 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 14:44:34.230969  682995 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-626179/.minikube/bin
	I1006 14:44:34.231523  682995 out.go:368] Setting JSON to false
	I1006 14:44:34.232538  682995 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":19610,"bootTime":1759742264,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1006 14:44:34.232651  682995 start.go:140] virtualization: kvm guest
	I1006 14:44:34.235278  682995 out.go:179] * [ha-481559] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1006 14:44:34.236668  682995 out.go:179]   - MINIKUBE_LOCATION=21701
	I1006 14:44:34.236708  682995 notify.go:220] Checking for updates...
	I1006 14:44:34.239256  682995 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1006 14:44:34.240475  682995 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21701-626179/kubeconfig
	I1006 14:44:34.242249  682995 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21701-626179/.minikube
	I1006 14:44:34.243577  682995 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1006 14:44:34.244737  682995 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1006 14:44:34.246267  682995 driver.go:421] Setting default libvirt URI to qemu:///system
	I1006 14:44:34.271626  682995 docker.go:123] docker version: linux-28.5.0:Docker Engine - Community
	I1006 14:44:34.271783  682995 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 14:44:34.334697  682995 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-06 14:44:34.323928193 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1006 14:44:34.334819  682995 docker.go:318] overlay module found
	I1006 14:44:34.336770  682995 out.go:179] * Using the docker driver based on user configuration
	I1006 14:44:34.338109  682995 start.go:304] selected driver: docker
	I1006 14:44:34.338130  682995 start.go:924] validating driver "docker" against <nil>
	I1006 14:44:34.338144  682995 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1006 14:44:34.338750  682995 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 14:44:34.398314  682995 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-06 14:44:34.387376197 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1006 14:44:34.398587  682995 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1006 14:44:34.399080  682995 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1006 14:44:34.401095  682995 out.go:179] * Using Docker driver with root privileges
	I1006 14:44:34.402283  682995 cni.go:84] Creating CNI manager for ""
	I1006 14:44:34.402367  682995 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1006 14:44:34.402383  682995 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1006 14:44:34.402476  682995 start.go:348] cluster config:
	{Name:ha-481559 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-481559 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 14:44:34.403829  682995 out.go:179] * Starting "ha-481559" primary control-plane node in "ha-481559" cluster
	I1006 14:44:34.404899  682995 cache.go:123] Beginning downloading kic base image for docker with crio
	I1006 14:44:34.406166  682995 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1006 14:44:34.407227  682995 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1006 14:44:34.407272  682995 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21701-626179/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1006 14:44:34.407284  682995 cache.go:58] Caching tarball of preloaded images
	I1006 14:44:34.407376  682995 preload.go:233] Found /home/jenkins/minikube-integration/21701-626179/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1006 14:44:34.407382  682995 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1006 14:44:34.407387  682995 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1006 14:44:34.407757  682995 profile.go:143] Saving config to /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/config.json ...
	I1006 14:44:34.407793  682995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/config.json: {Name:mkefd90ec0b9eae63c82d60bab053cdf7b5d9b74 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:44:34.429193  682995 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1006 14:44:34.429233  682995 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1006 14:44:34.429254  682995 cache.go:232] Successfully downloaded all kic artifacts
	I1006 14:44:34.429296  682995 start.go:360] acquireMachinesLock for ha-481559: {Name:mk240cd185ab39e9e4d3fa7c476aea5736cb5b11 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1006 14:44:34.429397  682995 start.go:364] duration metric: took 84.055µs to acquireMachinesLock for "ha-481559"
	I1006 14:44:34.429421  682995 start.go:93] Provisioning new machine with config: &{Name:ha-481559 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-481559 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1006 14:44:34.429503  682995 start.go:125] createHost starting for "" (driver="docker")
	I1006 14:44:34.431456  682995 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1006 14:44:34.431692  682995 start.go:159] libmachine.API.Create for "ha-481559" (driver="docker")
	I1006 14:44:34.431725  682995 client.go:168] LocalClient.Create starting
	I1006 14:44:34.431791  682995 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem
	I1006 14:44:34.431825  682995 main.go:141] libmachine: Decoding PEM data...
	I1006 14:44:34.431843  682995 main.go:141] libmachine: Parsing certificate...
	I1006 14:44:34.431939  682995 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21701-626179/.minikube/certs/cert.pem
	I1006 14:44:34.431977  682995 main.go:141] libmachine: Decoding PEM data...
	I1006 14:44:34.431994  682995 main.go:141] libmachine: Parsing certificate...
	I1006 14:44:34.432416  682995 cli_runner.go:164] Run: docker network inspect ha-481559 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1006 14:44:34.449965  682995 cli_runner.go:211] docker network inspect ha-481559 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1006 14:44:34.450053  682995 network_create.go:284] running [docker network inspect ha-481559] to gather additional debugging logs...
	I1006 14:44:34.450071  682995 cli_runner.go:164] Run: docker network inspect ha-481559
	W1006 14:44:34.468682  682995 cli_runner.go:211] docker network inspect ha-481559 returned with exit code 1
	I1006 14:44:34.468713  682995 network_create.go:287] error running [docker network inspect ha-481559]: docker network inspect ha-481559: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-481559 not found
	I1006 14:44:34.468724  682995 network_create.go:289] output of [docker network inspect ha-481559]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-481559 not found
	
	** /stderr **
	I1006 14:44:34.468902  682995 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1006 14:44:34.488223  682995 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ca2540}
	I1006 14:44:34.488276  682995 network_create.go:124] attempt to create docker network ha-481559 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1006 14:44:34.488338  682995 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-481559 ha-481559
	I1006 14:44:34.548630  682995 network_create.go:108] docker network ha-481559 192.168.49.0/24 created
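	Note: the two steps above show minikube scanning for the first free private /24 and then creating the cluster network; the logged command can be reproduced standalone (a sketch, assuming no network named ha-481559 exists yet):
	  # create the bridge network with the subnet chosen by the free-subnet scan
	  docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 \
	    -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 \
	    --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-481559 ha-481559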
	I1006 14:44:34.548669  682995 kic.go:121] calculated static IP "192.168.49.2" for the "ha-481559" container
	I1006 14:44:34.548729  682995 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1006 14:44:34.566959  682995 cli_runner.go:164] Run: docker volume create ha-481559 --label name.minikube.sigs.k8s.io=ha-481559 --label created_by.minikube.sigs.k8s.io=true
	I1006 14:44:34.586001  682995 oci.go:103] Successfully created a docker volume ha-481559
	I1006 14:44:34.586088  682995 cli_runner.go:164] Run: docker run --rm --name ha-481559-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-481559 --entrypoint /usr/bin/test -v ha-481559:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib
	I1006 14:44:34.994169  682995 oci.go:107] Successfully prepared a docker volume ha-481559
	I1006 14:44:34.994233  682995 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1006 14:44:34.994280  682995 kic.go:194] Starting extracting preloaded images to volume ...
	I1006 14:44:34.994349  682995 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21701-626179/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-481559:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir
	I1006 14:44:39.551248  682995 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21701-626179/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-481559:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir: (4.556814521s)
	I1006 14:44:39.551287  682995 kic.go:203] duration metric: took 4.557022471s to extract preloaded images to volume ...
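	Note: the preload step above is just tar running inside the base image, streaming the lz4 tarball into the named volume; a minimal equivalent (the ~/.minikube prefix is an assumption, this CI run used a jenkins-specific path, and the image digest from the log is omitted for brevity):
	  # extract the preloaded cri-o images into the ha-481559 volume
	  docker run --rm --entrypoint /usr/bin/tar \
	    -v "$HOME/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro" \
	    -v ha-481559:/extractDir \
	    gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643 -I lz4 -xf /preloaded.tar -C /extractDir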
	W1006 14:44:39.551374  682995 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1006 14:44:39.551406  682995 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1006 14:44:39.551451  682995 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1006 14:44:39.608040  682995 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-481559 --name ha-481559 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-481559 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-481559 --network ha-481559 --ip 192.168.49.2 --volume ha-481559:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d
	I1006 14:44:39.865946  682995 cli_runner.go:164] Run: docker container inspect ha-481559 --format={{.State.Running}}
	I1006 14:44:39.883061  682995 cli_runner.go:164] Run: docker container inspect ha-481559 --format={{.State.Status}}
	I1006 14:44:39.901066  682995 cli_runner.go:164] Run: docker exec ha-481559 stat /var/lib/dpkg/alternatives/iptables
	I1006 14:44:39.951869  682995 oci.go:144] the created container "ha-481559" has a running status.
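	Note: every container port above is published as 127.0.0.1:: so Docker assigns an ephemeral host port; the SSH endpoint therefore has to be looked up after the container starts, e.g.:
	  # show which localhost port maps to the node's sshd (32883 in this run)
	  docker port ha-481559 22/tcp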
	I1006 14:44:39.951908  682995 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa...
	I1006 14:44:40.176341  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1006 14:44:40.176392  682995 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1006 14:44:40.205643  682995 cli_runner.go:164] Run: docker container inspect ha-481559 --format={{.State.Status}}
	I1006 14:44:40.227924  682995 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1006 14:44:40.227948  682995 kic_runner.go:114] Args: [docker exec --privileged ha-481559 chown docker:docker /home/docker/.ssh/authorized_keys]
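	Note: the kic SSH setup above amounts to an ordinary unencrypted RSA keypair plus copying the public half into the node's authorized_keys; a manual equivalent of the key step (machine path as logged, with the ~/.minikube prefix assumed):
	  # generate the node keypair at the path minikube logged above
	  ssh-keygen -t rsa -N "" -f ~/.minikube/machines/ha-481559/id_rsa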
	I1006 14:44:40.277808  682995 cli_runner.go:164] Run: docker container inspect ha-481559 --format={{.State.Status}}
	I1006 14:44:40.297063  682995 machine.go:93] provisionDockerMachine start ...
	I1006 14:44:40.297156  682995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:44:40.315828  682995 main.go:141] libmachine: Using SSH client type: native
	I1006 14:44:40.316109  682995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32883 <nil> <nil>}
	I1006 14:44:40.316124  682995 main.go:141] libmachine: About to run SSH command:
	hostname
	I1006 14:44:40.461735  682995 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-481559
	
	I1006 14:44:40.461771  682995 ubuntu.go:182] provisioning hostname "ha-481559"
	I1006 14:44:40.461843  682995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:44:40.481222  682995 main.go:141] libmachine: Using SSH client type: native
	I1006 14:44:40.481551  682995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32883 <nil> <nil>}
	I1006 14:44:40.481575  682995 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-481559 && echo "ha-481559" | sudo tee /etc/hostname
	I1006 14:44:40.636624  682995 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-481559
	
	I1006 14:44:40.636709  682995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:44:40.655017  682995 main.go:141] libmachine: Using SSH client type: native
	I1006 14:44:40.655283  682995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32883 <nil> <nil>}
	I1006 14:44:40.655302  682995 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-481559' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-481559/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-481559' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1006 14:44:40.801276  682995 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1006 14:44:40.801313  682995 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21701-626179/.minikube CaCertPath:/home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21701-626179/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21701-626179/.minikube}
	I1006 14:44:40.801332  682995 ubuntu.go:190] setting up certificates
	I1006 14:44:40.801344  682995 provision.go:84] configureAuth start
	I1006 14:44:40.801398  682995 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-481559
	I1006 14:44:40.819000  682995 provision.go:143] copyHostCerts
	I1006 14:44:40.819052  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21701-626179/.minikube/ca.pem
	I1006 14:44:40.819089  682995 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-626179/.minikube/ca.pem, removing ...
	I1006 14:44:40.819099  682995 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-626179/.minikube/ca.pem
	I1006 14:44:40.819169  682995 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21701-626179/.minikube/ca.pem (1082 bytes)
	I1006 14:44:40.819281  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21701-626179/.minikube/cert.pem
	I1006 14:44:40.819304  682995 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-626179/.minikube/cert.pem, removing ...
	I1006 14:44:40.819309  682995 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-626179/.minikube/cert.pem
	I1006 14:44:40.819338  682995 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21701-626179/.minikube/cert.pem (1123 bytes)
	I1006 14:44:40.819400  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21701-626179/.minikube/key.pem
	I1006 14:44:40.819416  682995 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-626179/.minikube/key.pem, removing ...
	I1006 14:44:40.819428  682995 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-626179/.minikube/key.pem
	I1006 14:44:40.819460  682995 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21701-626179/.minikube/key.pem (1679 bytes)
	I1006 14:44:40.819525  682995 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21701-626179/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca-key.pem org=jenkins.ha-481559 san=[127.0.0.1 192.168.49.2 ha-481559 localhost minikube]
	I1006 14:44:40.896257  682995 provision.go:177] copyRemoteCerts
	I1006 14:44:40.896328  682995 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1006 14:44:40.896370  682995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:44:40.914092  682995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32883 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa Username:docker}
	I1006 14:44:41.016898  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1006 14:44:41.016969  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1006 14:44:41.037131  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1006 14:44:41.037215  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1006 14:44:41.055180  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1006 14:44:41.055258  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
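	Note: the server certificate copied above was generated with san=[127.0.0.1 192.168.49.2 ha-481559 localhost minikube]; one way to confirm the SANs landed on the node (assuming shell access over the SSH port mapped earlier):
	  # print the Subject Alternative Names of the provisioned server certificate
	  openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'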
	I1006 14:44:41.073045  682995 provision.go:87] duration metric: took 271.684433ms to configureAuth
	I1006 14:44:41.073074  682995 ubuntu.go:206] setting minikube options for container-runtime
	I1006 14:44:41.073312  682995 config.go:182] Loaded profile config "ha-481559": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 14:44:41.073456  682995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:44:41.092548  682995 main.go:141] libmachine: Using SSH client type: native
	I1006 14:44:41.092838  682995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32883 <nil> <nil>}
	I1006 14:44:41.092869  682995 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1006 14:44:41.356221  682995 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1006 14:44:41.356247  682995 machine.go:96] duration metric: took 1.059160507s to provisionDockerMachine
	I1006 14:44:41.356259  682995 client.go:171] duration metric: took 6.924524382s to LocalClient.Create
	I1006 14:44:41.356282  682995 start.go:167] duration metric: took 6.924591304s to libmachine.API.Create "ha-481559"
	I1006 14:44:41.356295  682995 start.go:293] postStartSetup for "ha-481559" (driver="docker")
	I1006 14:44:41.356322  682995 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1006 14:44:41.356396  682995 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1006 14:44:41.356453  682995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:44:41.374424  682995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32883 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa Username:docker}
	I1006 14:44:41.479545  682995 ssh_runner.go:195] Run: cat /etc/os-release
	I1006 14:44:41.483318  682995 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1006 14:44:41.483345  682995 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1006 14:44:41.483356  682995 filesync.go:126] Scanning /home/jenkins/minikube-integration/21701-626179/.minikube/addons for local assets ...
	I1006 14:44:41.483402  682995 filesync.go:126] Scanning /home/jenkins/minikube-integration/21701-626179/.minikube/files for local assets ...
	I1006 14:44:41.483499  682995 filesync.go:149] local asset: /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem -> 6297192.pem in /etc/ssl/certs
	I1006 14:44:41.483510  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem -> /etc/ssl/certs/6297192.pem
	I1006 14:44:41.483603  682995 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1006 14:44:41.491409  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem --> /etc/ssl/certs/6297192.pem (1708 bytes)
	I1006 14:44:41.511609  682995 start.go:296] duration metric: took 155.29938ms for postStartSetup
	I1006 14:44:41.511914  682995 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-481559
	I1006 14:44:41.529867  682995 profile.go:143] Saving config to /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/config.json ...
	I1006 14:44:41.530158  682995 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1006 14:44:41.530223  682995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:44:41.547995  682995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32883 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa Username:docker}
	I1006 14:44:41.647810  682995 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1006 14:44:41.652637  682995 start.go:128] duration metric: took 7.223117194s to createHost
	I1006 14:44:41.652662  682995 start.go:83] releasing machines lock for "ha-481559", held for 7.223254897s
	I1006 14:44:41.652730  682995 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-481559
	I1006 14:44:41.670486  682995 ssh_runner.go:195] Run: cat /version.json
	I1006 14:44:41.670511  682995 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1006 14:44:41.670555  682995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:44:41.670581  682995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:44:41.689278  682995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32883 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa Username:docker}
	I1006 14:44:41.689801  682995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32883 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa Username:docker}
	I1006 14:44:41.845142  682995 ssh_runner.go:195] Run: systemctl --version
	I1006 14:44:41.852333  682995 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1006 14:44:41.886799  682995 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1006 14:44:41.891575  682995 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1006 14:44:41.891645  682995 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1006 14:44:41.918020  682995 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1006 14:44:41.918049  682995 start.go:495] detecting cgroup driver to use...
	I1006 14:44:41.918088  682995 detect.go:190] detected "systemd" cgroup driver on host os
	I1006 14:44:41.918148  682995 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1006 14:44:41.934827  682995 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1006 14:44:41.946573  682995 docker.go:218] disabling cri-docker service (if available) ...
	I1006 14:44:41.946626  682995 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1006 14:44:41.961811  682995 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1006 14:44:41.978333  682995 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1006 14:44:42.056893  682995 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1006 14:44:42.140645  682995 docker.go:234] disabling docker service ...
	I1006 14:44:42.140713  682995 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1006 14:44:42.159372  682995 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1006 14:44:42.171857  682995 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1006 14:44:42.255908  682995 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1006 14:44:42.340081  682995 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1006 14:44:42.352916  682995 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1006 14:44:42.367142  682995 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1006 14:44:42.367215  682995 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:44:42.377866  682995 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1006 14:44:42.377939  682995 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:44:42.387157  682995 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:44:42.395944  682995 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:44:42.404768  682995 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1006 14:44:42.412712  682995 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:44:42.420910  682995 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:44:42.434108  682995 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:44:42.442895  682995 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1006 14:44:42.450289  682995 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1006 14:44:42.457667  682995 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 14:44:42.535385  682995 ssh_runner.go:195] Run: sudo systemctl restart crio
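	Note: the CRI-O reconfiguration above reduces to a handful of in-place edits of /etc/crio/crio.conf.d/02-crio.conf followed by a restart; the two load-bearing edits, condensed from the sed calls in the log:
	  # pin the pause image and switch CRI-O to the systemd cgroup driver, then restart
	  sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf
	  sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf
	  sudo systemctl daemon-reload && sudo systemctl restart crio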
	I1006 14:44:42.643348  682995 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1006 14:44:42.643424  682995 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1006 14:44:42.647404  682995 start.go:563] Will wait 60s for crictl version
	I1006 14:44:42.647467  682995 ssh_runner.go:195] Run: which crictl
	I1006 14:44:42.651000  682995 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1006 14:44:42.675962  682995 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1006 14:44:42.676044  682995 ssh_runner.go:195] Run: crio --version
	I1006 14:44:42.705541  682995 ssh_runner.go:195] Run: crio --version
	I1006 14:44:42.736773  682995 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1006 14:44:42.738090  682995 cli_runner.go:164] Run: docker network inspect ha-481559 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1006 14:44:42.754892  682995 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1006 14:44:42.759274  682995 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1006 14:44:42.770415  682995 kubeadm.go:883] updating cluster {Name:ha-481559 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-481559 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1006 14:44:42.770534  682995 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1006 14:44:42.770581  682995 ssh_runner.go:195] Run: sudo crictl images --output json
	I1006 14:44:42.805187  682995 crio.go:514] all images are preloaded for cri-o runtime.
	I1006 14:44:42.805221  682995 crio.go:433] Images already preloaded, skipping extraction
	I1006 14:44:42.805274  682995 ssh_runner.go:195] Run: sudo crictl images --output json
	I1006 14:44:42.831096  682995 crio.go:514] all images are preloaded for cri-o runtime.
	I1006 14:44:42.831123  682995 cache_images.go:85] Images are preloaded, skipping loading
	I1006 14:44:42.831132  682995 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1006 14:44:42.831244  682995 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-481559 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-481559 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1006 14:44:42.831321  682995 ssh_runner.go:195] Run: crio config
	I1006 14:44:42.877768  682995 cni.go:84] Creating CNI manager for ""
	I1006 14:44:42.877790  682995 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1006 14:44:42.877819  682995 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1006 14:44:42.877840  682995 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-481559 NodeName:ha-481559 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1006 14:44:42.877966  682995 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-481559"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
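	Note: a generated config of this shape can be sanity-checked before kubeadm touches the node; a sketch against the staged copy written later in this log (--dry-run validates the config and renders manifests without starting anything):
	  sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run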
	
	I1006 14:44:42.877993  682995 kube-vip.go:115] generating kube-vip config ...
	I1006 14:44:42.878035  682995 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1006 14:44:42.890886  682995 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1006 14:44:42.890995  682995 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
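	Note: per the lsmod probe above, kube-vip's control-plane load balancing was skipped because no ip_vs modules were loaded; on a host whose kernel ships them, loading the set the probe looks for would be:
	  # load the ipvs modules, then re-run the probe from the log
	  sudo modprobe -a ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh
	  lsmod | grep ip_vs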
	I1006 14:44:42.891046  682995 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1006 14:44:42.899063  682995 binaries.go:44] Found k8s binaries, skipping transfer
	I1006 14:44:42.899132  682995 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1006 14:44:42.906926  682995 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1006 14:44:42.919358  682995 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1006 14:44:42.934141  682995 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1006 14:44:42.945961  682995 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I1006 14:44:42.959489  682995 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1006 14:44:42.962953  682995 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1006 14:44:42.972760  682995 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 14:44:43.053996  682995 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1006 14:44:43.077665  682995 certs.go:69] Setting up /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559 for IP: 192.168.49.2
	I1006 14:44:43.077692  682995 certs.go:195] generating shared ca certs ...
	I1006 14:44:43.077714  682995 certs.go:227] acquiring lock for ca certs: {Name:mka0cc25cb6a953e937aa825fc55167759271aaf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:44:43.077856  682995 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21701-626179/.minikube/ca.key
	I1006 14:44:43.077899  682995 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21701-626179/.minikube/proxy-client-ca.key
	I1006 14:44:43.077909  682995 certs.go:257] generating profile certs ...
	I1006 14:44:43.077963  682995 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/client.key
	I1006 14:44:43.077983  682995 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/client.crt with IP's: []
	I1006 14:44:43.259387  682995 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/client.crt ...
	I1006 14:44:43.259418  682995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/client.crt: {Name:mk058803c7a7f0f2aa3fb547a3aafbba9518c3f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:44:43.259607  682995 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/client.key ...
	I1006 14:44:43.259619  682995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/client.key: {Name:mk0ae3492597f7c1edf0d7262770452fa244a40b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:44:43.265151  682995 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.key.6031b710
	I1006 14:44:43.265175  682995 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.crt.6031b710 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I1006 14:44:43.807062  682995 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.crt.6031b710 ...
	I1006 14:44:43.807095  682995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.crt.6031b710: {Name:mk30dd14f07a4b732bb60853cc2fd5f84f73e2f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:44:43.807283  682995 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.key.6031b710 ...
	I1006 14:44:43.807298  682995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.key.6031b710: {Name:mkf3f5fbdf7957143c03cb611320a2e02acb94c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:44:43.807374  682995 certs.go:382] copying /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.crt.6031b710 -> /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.crt
	I1006 14:44:43.807489  682995 certs.go:386] copying /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.key.6031b710 -> /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.key
	I1006 14:44:43.807558  682995 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.key
	I1006 14:44:43.807574  682995 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.crt with IP's: []
	I1006 14:44:43.994115  682995 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.crt ...
	I1006 14:44:43.994149  682995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.crt: {Name:mk715c6902e25626016d7eb8fdb7b52f0fdce895 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:44:43.994338  682995 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.key ...
	I1006 14:44:43.994350  682995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.key: {Name:mka438ddf42b96ca34511dda1ce60f08f1d48b59 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
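
The profile certs above are ordinary X.509 issuance: the apiserver serving cert is signed against the local minikube CA with the service IP, loopback, node IP, and the HA VIP as IP SANs (the list at 14:44:43.265175). A self-contained sketch of that pattern follows; it generates a throwaway CA on the fly, whereas minikube reuses the persisted ca.crt/ca.key pair:

```go
// certsketch.go - sketch of issuing a CA-signed serving cert with IP SANs,
// mirroring the apiserver cert above. Error handling is elided for brevity.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// The SAN list from the log: service IP, loopback, node IP, HA VIP.
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.49.2"),
			net.ParseIP("192.168.49.254"),
		},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}
```
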
	I1006 14:44:43.994429  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1006 14:44:43.994449  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1006 14:44:43.994460  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1006 14:44:43.994470  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1006 14:44:43.994480  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1006 14:44:43.994490  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1006 14:44:43.994510  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1006 14:44:43.994522  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1006 14:44:43.994570  682995 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/629719.pem (1338 bytes)
	W1006 14:44:43.994617  682995 certs.go:480] ignoring /home/jenkins/minikube-integration/21701-626179/.minikube/certs/629719_empty.pem, impossibly tiny 0 bytes
	I1006 14:44:43.994630  682995 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca-key.pem (1675 bytes)
	I1006 14:44:43.994653  682995 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem (1082 bytes)
	I1006 14:44:43.994674  682995 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/cert.pem (1123 bytes)
	I1006 14:44:43.994701  682995 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/key.pem (1679 bytes)
	I1006 14:44:43.994739  682995 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem (1708 bytes)
	I1006 14:44:43.994772  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem -> /usr/share/ca-certificates/6297192.pem
	I1006 14:44:43.994786  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1006 14:44:43.994798  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/629719.pem -> /usr/share/ca-certificates/629719.pem
	I1006 14:44:43.995423  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1006 14:44:44.014422  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1006 14:44:44.032422  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1006 14:44:44.050727  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1006 14:44:44.068490  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1006 14:44:44.085540  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1006 14:44:44.102941  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1006 14:44:44.121043  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1006 14:44:44.139583  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem --> /usr/share/ca-certificates/6297192.pem (1708 bytes)
	I1006 14:44:44.159654  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1006 14:44:44.176939  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/certs/629719.pem --> /usr/share/ca-certificates/629719.pem (1338 bytes)
	I1006 14:44:44.194332  682995 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1006 14:44:44.207641  682995 ssh_runner.go:195] Run: openssl version
	I1006 14:44:44.214349  682995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6297192.pem && ln -fs /usr/share/ca-certificates/6297192.pem /etc/ssl/certs/6297192.pem"
	I1006 14:44:44.223426  682995 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6297192.pem
	I1006 14:44:44.227339  682995 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  6 14:13 /usr/share/ca-certificates/6297192.pem
	I1006 14:44:44.227401  682995 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6297192.pem
	I1006 14:44:44.261578  682995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6297192.pem /etc/ssl/certs/3ec20f2e.0"
	I1006 14:44:44.270472  682995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1006 14:44:44.279083  682995 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1006 14:44:44.282749  682995 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  6 13:56 /usr/share/ca-certificates/minikubeCA.pem
	I1006 14:44:44.282813  682995 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1006 14:44:44.316484  682995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1006 14:44:44.325228  682995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/629719.pem && ln -fs /usr/share/ca-certificates/629719.pem /etc/ssl/certs/629719.pem"
	I1006 14:44:44.334098  682995 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/629719.pem
	I1006 14:44:44.337988  682995 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  6 14:13 /usr/share/ca-certificates/629719.pem
	I1006 14:44:44.338051  682995 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/629719.pem
	I1006 14:44:44.371914  682995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/629719.pem /etc/ssl/certs/51391683.0"
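
The openssl sequence above implements the classic c_rehash layout: each trusted PEM is linked into /etc/ssl/certs under a name of the form <subject-hash>.0, where the hash comes from `openssl x509 -hash -noout` (hence b5213941.0 for minikubeCA.pem). A hedged sketch of the same convention, shelling out to the openssl CLI exactly as the log does:

```go
// rehash.go - sketch of the <subject-hash>.0 symlink convention used above.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	certPath := "/usr/share/ca-certificates/minikubeCA.pem" // path from the log
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	// Equivalent of ln -fs: replace any existing link.
	os.Remove(link)
	if err := os.Symlink(certPath, link); err != nil {
		panic(err)
	}
	fmt.Println("linked", link, "->", certPath)
}
```
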
	I1006 14:44:44.380847  682995 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1006 14:44:44.384643  682995 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1006 14:44:44.384694  682995 kubeadm.go:400] StartCluster: {Name:ha-481559 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-481559 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 14:44:44.384758  682995 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1006 14:44:44.384823  682995 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1006 14:44:44.413083  682995 cri.go:89] found id: ""
	I1006 14:44:44.413145  682995 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1006 14:44:44.421446  682995 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1006 14:44:44.429380  682995 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1006 14:44:44.429431  682995 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1006 14:44:44.437643  682995 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1006 14:44:44.437667  682995 kubeadm.go:157] found existing configuration files:
	
	I1006 14:44:44.437726  682995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1006 14:44:44.445948  682995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1006 14:44:44.446021  682995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1006 14:44:44.453451  682995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1006 14:44:44.460986  682995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1006 14:44:44.461064  682995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1006 14:44:44.468259  682995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1006 14:44:44.475830  682995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1006 14:44:44.475882  682995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1006 14:44:44.483080  682995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1006 14:44:44.490569  682995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1006 14:44:44.490632  682995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
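
This cleanup pass is the same check run four times: grep each kubeconfig under /etc/kubernetes for the https://control-plane.minikube.internal:8443 endpoint and remove the file when the grep fails (here every grep exits with status 2 simply because the files do not exist yet on a first start). A compact sketch of that sweep, assuming the filenames from the log:

```go
// staleconf.go - sketch of the stale-kubeconfig sweep shown above: keep a
// conf only if it already points at the expected control-plane endpoint.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	for _, name := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
		path := "/etc/kubernetes/" + name
		data, err := os.ReadFile(path)
		if err != nil || !strings.Contains(string(data), endpoint) {
			os.Remove(path) // missing or pointing elsewhere: let kubeadm regenerate it
			fmt.Println("removed (or absent):", path)
			continue
		}
		fmt.Println("kept:", path)
	}
}
```
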
	I1006 14:44:44.498056  682995 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1006 14:44:44.560210  682995 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1006 14:44:44.618315  682995 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1006 14:48:49.762009  682995 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1006 14:48:49.762136  682995 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1006 14:48:49.765019  682995 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1006 14:48:49.765065  682995 kubeadm.go:318] [preflight] Running pre-flight checks
	I1006 14:48:49.765142  682995 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1006 14:48:49.765192  682995 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1006 14:48:49.765263  682995 kubeadm.go:318] OS: Linux
	I1006 14:48:49.765329  682995 kubeadm.go:318] CGROUPS_CPU: enabled
	I1006 14:48:49.765384  682995 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1006 14:48:49.765424  682995 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1006 14:48:49.765465  682995 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1006 14:48:49.765507  682995 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1006 14:48:49.765557  682995 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1006 14:48:49.765644  682995 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1006 14:48:49.765713  682995 kubeadm.go:318] CGROUPS_IO: enabled
	I1006 14:48:49.765816  682995 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1006 14:48:49.765897  682995 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1006 14:48:49.765974  682995 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1006 14:48:49.766033  682995 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1006 14:48:49.768189  682995 out.go:252]   - Generating certificates and keys ...
	I1006 14:48:49.768304  682995 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1006 14:48:49.768391  682995 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1006 14:48:49.768495  682995 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1006 14:48:49.768546  682995 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1006 14:48:49.768600  682995 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1006 14:48:49.768641  682995 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1006 14:48:49.768684  682995 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1006 14:48:49.768778  682995 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [ha-481559 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1006 14:48:49.768847  682995 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1006 14:48:49.768982  682995 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [ha-481559 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1006 14:48:49.769042  682995 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1006 14:48:49.769108  682995 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1006 14:48:49.769166  682995 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1006 14:48:49.769263  682995 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1006 14:48:49.769339  682995 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1006 14:48:49.769416  682995 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1006 14:48:49.769489  682995 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1006 14:48:49.769549  682995 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1006 14:48:49.769601  682995 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1006 14:48:49.769671  682995 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1006 14:48:49.769753  682995 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1006 14:48:49.771489  682995 out.go:252]   - Booting up control plane ...
	I1006 14:48:49.771577  682995 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1006 14:48:49.771664  682995 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1006 14:48:49.771742  682995 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1006 14:48:49.771858  682995 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1006 14:48:49.771974  682995 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1006 14:48:49.772108  682995 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1006 14:48:49.772220  682995 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1006 14:48:49.772288  682995 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1006 14:48:49.772413  682995 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1006 14:48:49.772556  682995 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1006 14:48:49.772647  682995 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.501252368s
	I1006 14:48:49.772772  682995 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1006 14:48:49.772891  682995 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1006 14:48:49.772971  682995 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1006 14:48:49.773033  682995 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1006 14:48:49.773108  682995 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.001319326s
	I1006 14:48:49.773189  682995 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001358761s
	I1006 14:48:49.773304  682995 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.001281021s
	I1006 14:48:49.773319  682995 kubeadm.go:318] 
	I1006 14:48:49.773407  682995 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1006 14:48:49.773472  682995 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1006 14:48:49.773545  682995 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1006 14:48:49.773657  682995 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1006 14:48:49.773771  682995 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1006 14:48:49.773850  682995 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1006 14:48:49.773891  682995 kubeadm.go:318] 
	W1006 14:48:49.774048  682995 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ha-481559 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ha-481559 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.501252368s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.001319326s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001358761s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001281021s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
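
The failure mode is consistent across the full four-minute wait: kubeadm's control-plane-check never sees a healthy component. Note that the kube-apiserver probe is declared against https://192.168.49.2:8443/livez, yet the underlying GET goes to control-plane.minikube.internal:8443, i.e. through the 192.168.49.254 VIP written to /etc/hosts earlier, and times out, while the scheduler and controller-manager checks on loopback are refused outright. For local reproduction, a sketch of the same three-endpoint poll (endpoints copied from the log; this is not kubeadm's code):

```go
// cpcheck.go - sketch reproducing the control-plane-check probes above.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	endpoints := map[string]string{
		"kube-apiserver":          "https://192.168.49.2:8443/livez",
		"kube-controller-manager": "https://127.0.0.1:10257/healthz",
		"kube-scheduler":          "https://127.0.0.1:10259/livez",
	}
	client := &http.Client{
		Timeout:   10 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for name, url := range endpoints {
		resp, err := client.Get(url)
		if err != nil {
			fmt.Printf("%s: unhealthy (%v)\n", name, err)
			continue
		}
		resp.Body.Close()
		fmt.Printf("%s: %s\n", name, resp.Status)
	}
}
```
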
	
	I1006 14:48:49.774147  682995 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1006 14:48:52.524900  682995 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.75072398s)
	I1006 14:48:52.524985  682995 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1006 14:48:52.538104  682995 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1006 14:48:52.538173  682995 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1006 14:48:52.546610  682995 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1006 14:48:52.546639  682995 kubeadm.go:157] found existing configuration files:
	
	I1006 14:48:52.546692  682995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1006 14:48:52.555271  682995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1006 14:48:52.555334  682995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1006 14:48:52.564502  682995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1006 14:48:52.572861  682995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1006 14:48:52.572925  682995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1006 14:48:52.580681  682995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1006 14:48:52.588574  682995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1006 14:48:52.588636  682995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1006 14:48:52.596314  682995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1006 14:48:52.604007  682995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1006 14:48:52.604073  682995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1006 14:48:52.611967  682995 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1006 14:48:52.650794  682995 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1006 14:48:52.650844  682995 kubeadm.go:318] [preflight] Running pre-flight checks
	I1006 14:48:52.671446  682995 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1006 14:48:52.671559  682995 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1006 14:48:52.671628  682995 kubeadm.go:318] OS: Linux
	I1006 14:48:52.671718  682995 kubeadm.go:318] CGROUPS_CPU: enabled
	I1006 14:48:52.671766  682995 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1006 14:48:52.671811  682995 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1006 14:48:52.671850  682995 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1006 14:48:52.671890  682995 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1006 14:48:52.671928  682995 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1006 14:48:52.671972  682995 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1006 14:48:52.672010  682995 kubeadm.go:318] CGROUPS_IO: enabled
	I1006 14:48:52.732758  682995 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1006 14:48:52.732876  682995 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1006 14:48:52.732979  682995 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1006 14:48:52.739914  682995 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1006 14:48:52.743428  682995 out.go:252]   - Generating certificates and keys ...
	I1006 14:48:52.743535  682995 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1006 14:48:52.743654  682995 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1006 14:48:52.743727  682995 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1006 14:48:52.743777  682995 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1006 14:48:52.743861  682995 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1006 14:48:52.743911  682995 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1006 14:48:52.743985  682995 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1006 14:48:52.744055  682995 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1006 14:48:52.744143  682995 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1006 14:48:52.744228  682995 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1006 14:48:52.744266  682995 kubeadm.go:318] [certs] Using the existing "sa" key
	I1006 14:48:52.744323  682995 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1006 14:48:53.107297  682995 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1006 14:48:53.300701  682995 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1006 14:48:53.503166  682995 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1006 14:48:53.664024  682995 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1006 14:48:53.725865  682995 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1006 14:48:53.726293  682995 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1006 14:48:53.728797  682995 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1006 14:48:53.730586  682995 out.go:252]   - Booting up control plane ...
	I1006 14:48:53.730720  682995 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1006 14:48:53.730830  682995 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1006 14:48:53.730903  682995 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1006 14:48:53.744534  682995 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1006 14:48:53.744672  682995 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1006 14:48:53.752267  682995 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1006 14:48:53.752422  682995 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1006 14:48:53.752505  682995 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1006 14:48:53.852049  682995 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1006 14:48:53.852226  682995 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1006 14:48:54.353729  682995 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 501.825241ms
	I1006 14:48:54.356542  682995 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1006 14:48:54.356619  682995 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1006 14:48:54.356695  682995 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1006 14:48:54.356819  682995 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1006 14:52:54.358331  682995 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.001082251s
	I1006 14:52:54.358653  682995 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001136686s
	I1006 14:52:54.358853  682995 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.001070627s
	I1006 14:52:54.358881  682995 kubeadm.go:318] 
	I1006 14:52:54.359059  682995 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1006 14:52:54.359298  682995 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1006 14:52:54.359539  682995 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1006 14:52:54.359760  682995 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1006 14:52:54.359952  682995 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1006 14:52:54.360116  682995 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1006 14:52:54.360148  682995 kubeadm.go:318] 
	I1006 14:52:54.363033  682995 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1006 14:52:54.363163  682995 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1006 14:52:54.363696  682995 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1006 14:52:54.363761  682995 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1006 14:52:54.363858  682995 kubeadm.go:402] duration metric: took 8m9.979166519s to StartCluster
	I1006 14:52:54.363946  682995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:52:54.364031  682995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:52:54.392579  682995 cri.go:89] found id: ""
	I1006 14:52:54.392622  682995 logs.go:282] 0 containers: []
	W1006 14:52:54.392631  682995 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:52:54.392638  682995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:52:54.392693  682995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:52:54.420188  682995 cri.go:89] found id: ""
	I1006 14:52:54.420226  682995 logs.go:282] 0 containers: []
	W1006 14:52:54.420237  682995 logs.go:284] No container was found matching "etcd"
	I1006 14:52:54.420245  682995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:52:54.420299  682995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:52:54.445694  682995 cri.go:89] found id: ""
	I1006 14:52:54.445723  682995 logs.go:282] 0 containers: []
	W1006 14:52:54.445733  682995 logs.go:284] No container was found matching "coredns"
	I1006 14:52:54.445740  682995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:52:54.445791  682995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:52:54.471923  682995 cri.go:89] found id: ""
	I1006 14:52:54.471954  682995 logs.go:282] 0 containers: []
	W1006 14:52:54.471962  682995 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:52:54.471971  682995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:52:54.472030  682995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:52:54.498805  682995 cri.go:89] found id: ""
	I1006 14:52:54.498836  682995 logs.go:282] 0 containers: []
	W1006 14:52:54.498848  682995 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:52:54.498857  682995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:52:54.498922  682995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:52:54.524613  682995 cri.go:89] found id: ""
	I1006 14:52:54.524638  682995 logs.go:282] 0 containers: []
	W1006 14:52:54.524646  682995 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:52:54.524652  682995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:52:54.524708  682995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:52:54.551140  682995 cri.go:89] found id: ""
	I1006 14:52:54.551170  682995 logs.go:282] 0 containers: []
	W1006 14:52:54.551181  682995 logs.go:284] No container was found matching "kindnet"
	I1006 14:52:54.551194  682995 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:52:54.551220  682995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:52:54.615573  682995 logs.go:123] Gathering logs for container status ...
	I1006 14:52:54.615607  682995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:52:54.645703  682995 logs.go:123] Gathering logs for kubelet ...
	I1006 14:52:54.645732  682995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:52:54.709506  682995 logs.go:123] Gathering logs for dmesg ...
	I1006 14:52:54.709543  682995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:52:54.722963  682995 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:52:54.722997  682995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:52:54.783016  682995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:52:54.774940    2611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 14:52:54.776283    2611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 14:52:54.777585    2611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 14:52:54.778053    2611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 14:52:54.779590    2611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:52:54.774940    2611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 14:52:54.776283    2611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 14:52:54.777585    2611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 14:52:54.778053    2611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 14:52:54.779590    2611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W1006 14:52:54.783054  682995 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.825241ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.001082251s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001136686s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001070627s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1006 14:52:54.783107  682995 out.go:285] * 
	W1006 14:52:54.783182  682995 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.825241ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.001082251s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001136686s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001070627s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1006 14:52:54.783200  682995 out.go:285] * 
	W1006 14:52:54.785658  682995 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1006 14:52:54.789273  682995 out.go:203] 
	W1006 14:52:54.790573  682995 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.825241ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.001082251s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001136686s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001070627s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1006 14:52:54.790604  682995 out.go:285] * 
	I1006 14:52:54.791821  682995 out.go:203] 
	
	
	==> CRI-O <==
	Oct 06 14:54:18 ha-481559 crio[777]: time="2025-10-06T14:54:18.248470804Z" level=info msg="createCtr: removing container fa77408e15584543e6120d9368eb14b8934e9db4687dc2fbecb8a799e7bc2c8a" id=2cc177f4-c68b-41bb-854c-9b932542a4d9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:54:18 ha-481559 crio[777]: time="2025-10-06T14:54:18.248499912Z" level=info msg="createCtr: deleting container fa77408e15584543e6120d9368eb14b8934e9db4687dc2fbecb8a799e7bc2c8a from storage" id=2cc177f4-c68b-41bb-854c-9b932542a4d9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:54:18 ha-481559 crio[777]: time="2025-10-06T14:54:18.250376963Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-ha-481559_kube-system_cc93cb8d89afaa943672c70952b45174_0" id=2cc177f4-c68b-41bb-854c-9b932542a4d9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:54:27 ha-481559 crio[777]: time="2025-10-06T14:54:27.221669418Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=d74faff1-6f9c-4b14-8742-e1b6f37f0a5e name=/runtime.v1.ImageService/ImageStatus
	Oct 06 14:54:27 ha-481559 crio[777]: time="2025-10-06T14:54:27.222583794Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=bf74765d-5b49-40b2-947c-7b7de3af1367 name=/runtime.v1.ImageService/ImageStatus
	Oct 06 14:54:27 ha-481559 crio[777]: time="2025-10-06T14:54:27.223547801Z" level=info msg="Creating container: kube-system/kube-apiserver-ha-481559/kube-apiserver" id=ea7eba65-7901-4b0a-8dc1-512a8ae40c08 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:54:27 ha-481559 crio[777]: time="2025-10-06T14:54:27.223752683Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 14:54:27 ha-481559 crio[777]: time="2025-10-06T14:54:27.227430642Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 14:54:27 ha-481559 crio[777]: time="2025-10-06T14:54:27.227880877Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 14:54:27 ha-481559 crio[777]: time="2025-10-06T14:54:27.248638243Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=ea7eba65-7901-4b0a-8dc1-512a8ae40c08 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:54:27 ha-481559 crio[777]: time="2025-10-06T14:54:27.250102327Z" level=info msg="createCtr: deleting container ID 44bd9526540da0c835e3df8165ad6d393d56f73b366013b65c8d81c87c72a71c from idIndex" id=ea7eba65-7901-4b0a-8dc1-512a8ae40c08 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:54:27 ha-481559 crio[777]: time="2025-10-06T14:54:27.250142803Z" level=info msg="createCtr: removing container 44bd9526540da0c835e3df8165ad6d393d56f73b366013b65c8d81c87c72a71c" id=ea7eba65-7901-4b0a-8dc1-512a8ae40c08 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:54:27 ha-481559 crio[777]: time="2025-10-06T14:54:27.250176316Z" level=info msg="createCtr: deleting container 44bd9526540da0c835e3df8165ad6d393d56f73b366013b65c8d81c87c72a71c from storage" id=ea7eba65-7901-4b0a-8dc1-512a8ae40c08 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:54:27 ha-481559 crio[777]: time="2025-10-06T14:54:27.252283198Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-ha-481559_kube-system_b4e1cca8a09d3789a7e0862428dfe0db_0" id=ea7eba65-7901-4b0a-8dc1-512a8ae40c08 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:54:28 ha-481559 crio[777]: time="2025-10-06T14:54:28.221551253Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=17d78e94-e04d-498b-bc45-5d0791b6c8a5 name=/runtime.v1.ImageService/ImageStatus
	Oct 06 14:54:28 ha-481559 crio[777]: time="2025-10-06T14:54:28.222818644Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=47a2165f-a6f2-492d-9d1e-29dca5bcd8d0 name=/runtime.v1.ImageService/ImageStatus
	Oct 06 14:54:28 ha-481559 crio[777]: time="2025-10-06T14:54:28.223780089Z" level=info msg="Creating container: kube-system/etcd-ha-481559/etcd" id=ecbc1c2a-3ed9-4452-81c8-a6b6b312f34f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:54:28 ha-481559 crio[777]: time="2025-10-06T14:54:28.224008001Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 14:54:28 ha-481559 crio[777]: time="2025-10-06T14:54:28.227534186Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 14:54:28 ha-481559 crio[777]: time="2025-10-06T14:54:28.227929945Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 14:54:28 ha-481559 crio[777]: time="2025-10-06T14:54:28.244953858Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=ecbc1c2a-3ed9-4452-81c8-a6b6b312f34f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:54:28 ha-481559 crio[777]: time="2025-10-06T14:54:28.246382165Z" level=info msg="createCtr: deleting container ID c1376676dafaf7b4d10a72a589a3ae2d56ecf790744e031ae536ebf8175e4485 from idIndex" id=ecbc1c2a-3ed9-4452-81c8-a6b6b312f34f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:54:28 ha-481559 crio[777]: time="2025-10-06T14:54:28.246426068Z" level=info msg="createCtr: removing container c1376676dafaf7b4d10a72a589a3ae2d56ecf790744e031ae536ebf8175e4485" id=ecbc1c2a-3ed9-4452-81c8-a6b6b312f34f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:54:28 ha-481559 crio[777]: time="2025-10-06T14:54:28.246465998Z" level=info msg="createCtr: deleting container c1376676dafaf7b4d10a72a589a3ae2d56ecf790744e031ae536ebf8175e4485 from storage" id=ecbc1c2a-3ed9-4452-81c8-a6b6b312f34f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:54:28 ha-481559 crio[777]: time="2025-10-06T14:54:28.249858456Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-ha-481559_kube-system_520c6060936b1c2aac479c99ed6c0355_0" id=ecbc1c2a-3ed9-4452-81c8-a6b6b312f34f name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:54:30.149174    3234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 14:54:30.149830    3234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 14:54:30.151420    3234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 14:54:30.151845    3234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 14:54:30.153522    3234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	
	
	==> kernel <==
	 14:54:30 up  5:36,  0 user,  load average: 0.34, 0.11, 0.16
	Linux ha-481559 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 06 14:54:19 ha-481559 kubelet[1985]: E1006 14:54:19.037620    1985 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8443: connect: connection refused" event="&Event{ObjectMeta:{ha-481559.186bee56630f6256  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-481559,UID:ha-481559,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ha-481559 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ha-481559,},FirstTimestamp:2025-10-06 14:48:54.214861398 +0000 UTC m=+0.361990569,LastTimestamp:2025-10-06 14:48:54.214861398 +0000 UTC m=+0.361990569,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-481559,}"
	Oct 06 14:54:21 ha-481559 kubelet[1985]: E1006 14:54:21.860396    1985 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-481559?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 06 14:54:22 ha-481559 kubelet[1985]: I1006 14:54:22.037492    1985 kubelet_node_status.go:75] "Attempting to register node" node="ha-481559"
	Oct 06 14:54:22 ha-481559 kubelet[1985]: E1006 14:54:22.037882    1985 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-481559"
	Oct 06 14:54:24 ha-481559 kubelet[1985]: E1006 14:54:24.244954    1985 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-481559\" not found"
	Oct 06 14:54:27 ha-481559 kubelet[1985]: E1006 14:54:27.221162    1985 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-481559\" not found" node="ha-481559"
	Oct 06 14:54:27 ha-481559 kubelet[1985]: E1006 14:54:27.252615    1985 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 06 14:54:27 ha-481559 kubelet[1985]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 06 14:54:27 ha-481559 kubelet[1985]:  > podSandboxID="cadd804367d6dcdba2fb49fe06e3c1db8b35e6ee5c505328925ae346d4cdb867"
	Oct 06 14:54:27 ha-481559 kubelet[1985]: E1006 14:54:27.252723    1985 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 06 14:54:27 ha-481559 kubelet[1985]:         container kube-apiserver start failed in pod kube-apiserver-ha-481559_kube-system(b4e1cca8a09d3789a7e0862428dfe0db): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 06 14:54:27 ha-481559 kubelet[1985]:  > logger="UnhandledError"
	Oct 06 14:54:27 ha-481559 kubelet[1985]: E1006 14:54:27.252753    1985 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-ha-481559" podUID="b4e1cca8a09d3789a7e0862428dfe0db"
	Oct 06 14:54:28 ha-481559 kubelet[1985]: E1006 14:54:28.221034    1985 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-481559\" not found" node="ha-481559"
	Oct 06 14:54:28 ha-481559 kubelet[1985]: E1006 14:54:28.250166    1985 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 06 14:54:28 ha-481559 kubelet[1985]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 06 14:54:28 ha-481559 kubelet[1985]:  > podSandboxID="a7ce34bebe17bc556bee492a72e0243ebe86fdfcd40a6e28aafa4e286d225bc6"
	Oct 06 14:54:28 ha-481559 kubelet[1985]: E1006 14:54:28.250298    1985 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 06 14:54:28 ha-481559 kubelet[1985]:         container etcd start failed in pod etcd-ha-481559_kube-system(520c6060936b1c2aac479c99ed6c0355): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 06 14:54:28 ha-481559 kubelet[1985]:  > logger="UnhandledError"
	Oct 06 14:54:28 ha-481559 kubelet[1985]: E1006 14:54:28.250344    1985 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-ha-481559" podUID="520c6060936b1c2aac479c99ed6c0355"
	Oct 06 14:54:28 ha-481559 kubelet[1985]: E1006 14:54:28.861462    1985 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-481559?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 06 14:54:29 ha-481559 kubelet[1985]: E1006 14:54:29.038379    1985 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8443: connect: connection refused" event="&Event{ObjectMeta:{ha-481559.186bee56630f6256  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-481559,UID:ha-481559,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ha-481559 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ha-481559,},FirstTimestamp:2025-10-06 14:48:54.214861398 +0000 UTC m=+0.361990569,LastTimestamp:2025-10-06 14:48:54.214861398 +0000 UTC m=+0.361990569,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-481559,}"
	Oct 06 14:54:29 ha-481559 kubelet[1985]: I1006 14:54:29.038950    1985 kubelet_node_status.go:75] "Attempting to register node" node="ha-481559"
	Oct 06 14:54:29 ha-481559 kubelet[1985]: E1006 14:54:29.039334    1985 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-481559"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-481559 -n ha-481559
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-481559 -n ha-481559: exit status 6 (302.483176ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1006 14:54:30.533261  690313 status.go:458] kubeconfig endpoint: get endpoint: "ha-481559" does not appear in /home/jenkins/minikube-integration/21701-626179/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "ha-481559" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/PingHostFromPods (1.36s)
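The CRI-O log above shows every control-plane container failing at creation with "cannot open sd-bus: No such file or directory", which is why all three kubeadm health checks time out. A minimal triage sketch, assuming the profile name and CRI-O socket path shown in this report, and following the crictl commands the kubeadm output itself suggests (run the crictl commands from inside the node; CONTAINERID is a placeholder):

	# SSH into the minikube node for this profile
	minikube -p ha-481559 ssh
	# List all kube containers, including failed ones (from the kubeadm hint above)
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# Inspect the logs of a failing container
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID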

                                                
                                    
x
+
TestMultiControlPlane/serial/AddWorkerNode (1.53s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-481559 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-481559 node add --alsologtostderr -v 5: exit status 103 (258.557933ms)

                                                
                                                
-- stdout --
	* The control-plane node ha-481559 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p ha-481559"

                                                
                                                
-- /stdout --
** stderr ** 
	I1006 14:54:30.594973  690428 out.go:360] Setting OutFile to fd 1 ...
	I1006 14:54:30.595283  690428 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 14:54:30.595294  690428 out.go:374] Setting ErrFile to fd 2...
	I1006 14:54:30.595301  690428 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 14:54:30.595506  690428 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-626179/.minikube/bin
	I1006 14:54:30.595843  690428 mustload.go:65] Loading cluster: ha-481559
	I1006 14:54:30.596241  690428 config.go:182] Loaded profile config "ha-481559": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 14:54:30.596662  690428 cli_runner.go:164] Run: docker container inspect ha-481559 --format={{.State.Status}}
	I1006 14:54:30.614405  690428 host.go:66] Checking if "ha-481559" exists ...
	I1006 14:54:30.614682  690428 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 14:54:30.671927  690428 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-06 14:54:30.661837416 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1006 14:54:30.672043  690428 api_server.go:166] Checking apiserver status ...
	I1006 14:54:30.672088  690428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:54:30.672124  690428 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:54:30.689049  690428 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32883 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa Username:docker}
	W1006 14:54:30.793835  690428 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1006 14:54:30.800969  690428 out.go:179] * The control-plane node ha-481559 apiserver is not running: (state=Stopped)
	I1006 14:54:30.802032  690428 out.go:179]   To start a cluster, run: "minikube start -p ha-481559"

                                                
                                                
** /stderr **
ha_test.go:230: failed to add worker node to current ha (multi-control plane) cluster. args "out/minikube-linux-amd64 -p ha-481559 node add --alsologtostderr -v 5" : exit status 103
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/AddWorkerNode]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/AddWorkerNode]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-481559
helpers_test.go:243: (dbg) docker inspect ha-481559:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "8b017d29b6b188c11217460aa328e959f3cceef4aaac68c0efbe9c3f356f27b0",
	        "Created": "2025-10-06T14:44:39.623616791Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 683567,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-06T14:44:39.660699919Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/8b017d29b6b188c11217460aa328e959f3cceef4aaac68c0efbe9c3f356f27b0/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8b017d29b6b188c11217460aa328e959f3cceef4aaac68c0efbe9c3f356f27b0/hostname",
	        "HostsPath": "/var/lib/docker/containers/8b017d29b6b188c11217460aa328e959f3cceef4aaac68c0efbe9c3f356f27b0/hosts",
	        "LogPath": "/var/lib/docker/containers/8b017d29b6b188c11217460aa328e959f3cceef4aaac68c0efbe9c3f356f27b0/8b017d29b6b188c11217460aa328e959f3cceef4aaac68c0efbe9c3f356f27b0-json.log",
	        "Name": "/ha-481559",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-481559:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-481559",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "8b017d29b6b188c11217460aa328e959f3cceef4aaac68c0efbe9c3f356f27b0",
	                "LowerDir": "/var/lib/docker/overlay2/764c4d13032cea1c981a1513641378a4e309e5bc01b5356c6fecba1c3e0e2311-init/diff:/var/lib/docker/overlay2/498c39ad2e273bbda04a4b230222b9767ea2da097b1fe98436168d26143cd080/diff",
	                "MergedDir": "/var/lib/docker/overlay2/764c4d13032cea1c981a1513641378a4e309e5bc01b5356c6fecba1c3e0e2311/merged",
	                "UpperDir": "/var/lib/docker/overlay2/764c4d13032cea1c981a1513641378a4e309e5bc01b5356c6fecba1c3e0e2311/diff",
	                "WorkDir": "/var/lib/docker/overlay2/764c4d13032cea1c981a1513641378a4e309e5bc01b5356c6fecba1c3e0e2311/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "ha-481559",
	                "Source": "/var/lib/docker/volumes/ha-481559/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-481559",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-481559",
	                "name.minikube.sigs.k8s.io": "ha-481559",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7effae92997970d320561b0b86c210815b18a55d65bd555e1bff50158ed38adc",
	            "SandboxKey": "/var/run/docker/netns/7effae929979",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32883"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32884"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32887"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32885"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32886"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-481559": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "42:f3:45:3f:5b:fc",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "be549c6a1ae4457d4629d9a7f86cde88021333ee0af8bb7a740b008115c43dde",
	                    "EndpointID": "b8540561692606ad815fcdb4502c1e36a16141413d3697f4cf48668502930e4c",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-481559",
	                        "8b017d29b6b1"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
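The 22/tcp HostPort in the inspect output above (32883) is the value minikube's cli_runner extracts before opening its SSH client (see the `docker container inspect -f ...` line in the stderr block above). A hand-run equivalent of that probe, assuming the same profile name:

	# Print the host port mapped to the node's SSH port (22/tcp)
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' ha-481559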
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-481559 -n ha-481559
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-481559 -n ha-481559: exit status 6 (290.664366ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1006 14:54:31.101709  690534 status.go:458] kubeconfig endpoint: get endpoint: "ha-481559" does not appear in /home/jenkins/minikube-integration/21701-626179/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
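Both status probes above exit 6 because "ha-481559" does not appear in the kubeconfig, and their stdout warns that kubectl points at a stale context. A minimal recovery sketch using the command the warning itself recommends (profile name taken from this report):

	# Rewrite the kubeconfig entry for this profile, then verify the active context
	minikube -p ha-481559 update-context
	kubectl config current-context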
helpers_test.go:252: <<< TestMultiControlPlane/serial/AddWorkerNode FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/AddWorkerNode]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-481559 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/AddWorkerNode logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                      ARGS                                                       │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-135520 image ls --format yaml --alsologtostderr                                                      │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │ 06 Oct 25 14:40 UTC │
	│ ssh     │ functional-135520 ssh pgrep buildkitd                                                                           │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │                     │
	│ image   │ functional-135520 image build -t localhost/my-image:functional-135520 testdata/build --alsologtostderr          │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │ 06 Oct 25 14:40 UTC │
	│ image   │ functional-135520 image ls                                                                                      │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │ 06 Oct 25 14:40 UTC │
	│ delete  │ -p functional-135520                                                                                            │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:44 UTC │ 06 Oct 25 14:44 UTC │
	│ start   │ ha-481559 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:44 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml                                                │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:52 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- rollout status deployment/busybox                                                          │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:52 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:52 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:52 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:52 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:53 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:53 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:53 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:53 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:53 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:53 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:54 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:54 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:54 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- exec  -- nslookup kubernetes.io                                                            │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:54 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- exec  -- nslookup kubernetes.default                                                       │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:54 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local                                     │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:54 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:54 UTC │                     │
	│ node    │ ha-481559 node add --alsologtostderr -v 5                                                                       │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:54 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/06 14:44:34
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1006 14:44:34.230587  682995 out.go:360] Setting OutFile to fd 1 ...
	I1006 14:44:34.230719  682995 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 14:44:34.230728  682995 out.go:374] Setting ErrFile to fd 2...
	I1006 14:44:34.230733  682995 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 14:44:34.230969  682995 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-626179/.minikube/bin
	I1006 14:44:34.231523  682995 out.go:368] Setting JSON to false
	I1006 14:44:34.232538  682995 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":19610,"bootTime":1759742264,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1006 14:44:34.232651  682995 start.go:140] virtualization: kvm guest
	I1006 14:44:34.235278  682995 out.go:179] * [ha-481559] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1006 14:44:34.236668  682995 out.go:179]   - MINIKUBE_LOCATION=21701
	I1006 14:44:34.236708  682995 notify.go:220] Checking for updates...
	I1006 14:44:34.239256  682995 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1006 14:44:34.240475  682995 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21701-626179/kubeconfig
	I1006 14:44:34.242249  682995 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21701-626179/.minikube
	I1006 14:44:34.243577  682995 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1006 14:44:34.244737  682995 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1006 14:44:34.246267  682995 driver.go:421] Setting default libvirt URI to qemu:///system
	I1006 14:44:34.271626  682995 docker.go:123] docker version: linux-28.5.0:Docker Engine - Community
	I1006 14:44:34.271783  682995 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 14:44:34.334697  682995 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-06 14:44:34.323928193 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1006 14:44:34.334819  682995 docker.go:318] overlay module found
	I1006 14:44:34.336770  682995 out.go:179] * Using the docker driver based on user configuration
	I1006 14:44:34.338109  682995 start.go:304] selected driver: docker
	I1006 14:44:34.338130  682995 start.go:924] validating driver "docker" against <nil>
	I1006 14:44:34.338144  682995 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1006 14:44:34.338750  682995 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 14:44:34.398314  682995 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-06 14:44:34.387376197 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1006 14:44:34.398587  682995 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1006 14:44:34.399080  682995 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1006 14:44:34.401095  682995 out.go:179] * Using Docker driver with root privileges
	I1006 14:44:34.402283  682995 cni.go:84] Creating CNI manager for ""
	I1006 14:44:34.402367  682995 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1006 14:44:34.402383  682995 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1006 14:44:34.402476  682995 start.go:348] cluster config:
	{Name:ha-481559 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-481559 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 14:44:34.403829  682995 out.go:179] * Starting "ha-481559" primary control-plane node in "ha-481559" cluster
	I1006 14:44:34.404899  682995 cache.go:123] Beginning downloading kic base image for docker with crio
	I1006 14:44:34.406166  682995 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1006 14:44:34.407227  682995 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1006 14:44:34.407272  682995 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21701-626179/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1006 14:44:34.407284  682995 cache.go:58] Caching tarball of preloaded images
	I1006 14:44:34.407376  682995 preload.go:233] Found /home/jenkins/minikube-integration/21701-626179/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1006 14:44:34.407382  682995 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1006 14:44:34.407387  682995 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1006 14:44:34.407757  682995 profile.go:143] Saving config to /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/config.json ...
	I1006 14:44:34.407793  682995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/config.json: {Name:mkefd90ec0b9eae63c82d60bab053cdf7b5d9b74 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:44:34.429193  682995 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1006 14:44:34.429233  682995 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1006 14:44:34.429254  682995 cache.go:232] Successfully downloaded all kic artifacts
	I1006 14:44:34.429296  682995 start.go:360] acquireMachinesLock for ha-481559: {Name:mk240cd185ab39e9e4d3fa7c476aea5736cb5b11 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1006 14:44:34.429397  682995 start.go:364] duration metric: took 84.055µs to acquireMachinesLock for "ha-481559"
	I1006 14:44:34.429421  682995 start.go:93] Provisioning new machine with config: &{Name:ha-481559 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-481559 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1006 14:44:34.429503  682995 start.go:125] createHost starting for "" (driver="docker")
	I1006 14:44:34.431456  682995 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1006 14:44:34.431692  682995 start.go:159] libmachine.API.Create for "ha-481559" (driver="docker")
	I1006 14:44:34.431725  682995 client.go:168] LocalClient.Create starting
	I1006 14:44:34.431791  682995 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem
	I1006 14:44:34.431825  682995 main.go:141] libmachine: Decoding PEM data...
	I1006 14:44:34.431843  682995 main.go:141] libmachine: Parsing certificate...
	I1006 14:44:34.431939  682995 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21701-626179/.minikube/certs/cert.pem
	I1006 14:44:34.431977  682995 main.go:141] libmachine: Decoding PEM data...
	I1006 14:44:34.431994  682995 main.go:141] libmachine: Parsing certificate...
	I1006 14:44:34.432416  682995 cli_runner.go:164] Run: docker network inspect ha-481559 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1006 14:44:34.449965  682995 cli_runner.go:211] docker network inspect ha-481559 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1006 14:44:34.450053  682995 network_create.go:284] running [docker network inspect ha-481559] to gather additional debugging logs...
	I1006 14:44:34.450071  682995 cli_runner.go:164] Run: docker network inspect ha-481559
	W1006 14:44:34.468682  682995 cli_runner.go:211] docker network inspect ha-481559 returned with exit code 1
	I1006 14:44:34.468713  682995 network_create.go:287] error running [docker network inspect ha-481559]: docker network inspect ha-481559: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-481559 not found
	I1006 14:44:34.468724  682995 network_create.go:289] output of [docker network inspect ha-481559]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-481559 not found
	
	** /stderr **
	I1006 14:44:34.468902  682995 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1006 14:44:34.488223  682995 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ca2540}
	I1006 14:44:34.488276  682995 network_create.go:124] attempt to create docker network ha-481559 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1006 14:44:34.488338  682995 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-481559 ha-481559
	I1006 14:44:34.548630  682995 network_create.go:108] docker network ha-481559 192.168.49.0/24 created
	I1006 14:44:34.548669  682995 kic.go:121] calculated static IP "192.168.49.2" for the "ha-481559" container
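
The entries above show minikube probing Docker for an existing ha-481559 network, falling back to inspecting the default bridge to find a free private /24, and creating 192.168.49.0/24 with an explicit gateway and MTU. A minimal verification sketch (the network name and expected values are taken from the log; this is not part of the test run):

    # Confirm the subnet and gateway of the freshly created bridge network
    docker network inspect ha-481559 \
      --format 'subnet={{(index .IPAM.Config 0).Subnet}} gateway={{(index .IPAM.Config 0).Gateway}}'
    # Expected, per the log: subnet=192.168.49.0/24 gateway=192.168.49.1
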
	I1006 14:44:34.548729  682995 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1006 14:44:34.566959  682995 cli_runner.go:164] Run: docker volume create ha-481559 --label name.minikube.sigs.k8s.io=ha-481559 --label created_by.minikube.sigs.k8s.io=true
	I1006 14:44:34.586001  682995 oci.go:103] Successfully created a docker volume ha-481559
	I1006 14:44:34.586088  682995 cli_runner.go:164] Run: docker run --rm --name ha-481559-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-481559 --entrypoint /usr/bin/test -v ha-481559:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib
	I1006 14:44:34.994169  682995 oci.go:107] Successfully prepared a docker volume ha-481559
	I1006 14:44:34.994233  682995 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1006 14:44:34.994280  682995 kic.go:194] Starting extracting preloaded images to volume ...
	I1006 14:44:34.994349  682995 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21701-626179/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-481559:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir
	I1006 14:44:39.551248  682995 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21701-626179/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-481559:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir: (4.556814521s)
	I1006 14:44:39.551287  682995 kic.go:203] duration metric: took 4.557022471s to extract preloaded images to volume ...
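
The preload step above bind-mounts the lz4 tarball into a throwaway kicbase container and untars it into the ha-481559 volume (about 4.6s). To peek at what landed there, the volume can be mounted read-only in a scratch container (a sketch; the lib/containers/storage path assumes CRI-O's default image-store layout, and busybox is only an example image):

    # List the extracted CRI-O image store inside the minikube volume
    docker run --rm -v ha-481559:/var:ro busybox ls /var/lib/containers/storage
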
	W1006 14:44:39.551374  682995 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1006 14:44:39.551406  682995 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1006 14:44:39.551451  682995 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1006 14:44:39.608040  682995 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-481559 --name ha-481559 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-481559 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-481559 --network ha-481559 --ip 192.168.49.2 --volume ha-481559:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d
	I1006 14:44:39.865946  682995 cli_runner.go:164] Run: docker container inspect ha-481559 --format={{.State.Running}}
	I1006 14:44:39.883061  682995 cli_runner.go:164] Run: docker container inspect ha-481559 --format={{.State.Status}}
	I1006 14:44:39.901066  682995 cli_runner.go:164] Run: docker exec ha-481559 stat /var/lib/dpkg/alternatives/iptables
	I1006 14:44:39.951869  682995 oci.go:144] the created container "ha-481559" has a running status.
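
The docker run invocation above publishes five container ports (22, 2376, 5000, 8443, 32443) on ephemeral 127.0.0.1 host ports. A quick way to see the assignments (container name from the log):

    # Show the loopback host ports Docker picked for the node container
    docker port ha-481559
    # e.g. 22/tcp -> 127.0.0.1:32883, matching the SSH port used below
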
	I1006 14:44:39.951908  682995 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa...
	I1006 14:44:40.176341  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1006 14:44:40.176392  682995 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1006 14:44:40.205643  682995 cli_runner.go:164] Run: docker container inspect ha-481559 --format={{.State.Status}}
	I1006 14:44:40.227924  682995 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1006 14:44:40.227948  682995 kic_runner.go:114] Args: [docker exec --privileged ha-481559 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1006 14:44:40.277808  682995 cli_runner.go:164] Run: docker container inspect ha-481559 --format={{.State.Status}}
	I1006 14:44:40.297063  682995 machine.go:93] provisionDockerMachine start ...
	I1006 14:44:40.297156  682995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:44:40.315828  682995 main.go:141] libmachine: Using SSH client type: native
	I1006 14:44:40.316109  682995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32883 <nil> <nil>}
	I1006 14:44:40.316124  682995 main.go:141] libmachine: About to run SSH command:
	hostname
	I1006 14:44:40.461735  682995 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-481559
	
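
provisionDockerMachine resolves the host port bound to 22/tcp with the inspect template shown above, then drives all further provisioning over a native SSH client. Reproduced by hand (key path and container name are taken from the log):

    # Resolve the ephemeral SSH port and run a command on the node
    PORT=$(docker container inspect \
      -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' ha-481559)
    ssh -p "$PORT" \
      -i /home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa \
      docker@127.0.0.1 hostname
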
	I1006 14:44:40.461771  682995 ubuntu.go:182] provisioning hostname "ha-481559"
	I1006 14:44:40.461843  682995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:44:40.481222  682995 main.go:141] libmachine: Using SSH client type: native
	I1006 14:44:40.481551  682995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32883 <nil> <nil>}
	I1006 14:44:40.481575  682995 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-481559 && echo "ha-481559" | sudo tee /etc/hostname
	I1006 14:44:40.636624  682995 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-481559
	
	I1006 14:44:40.636709  682995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:44:40.655017  682995 main.go:141] libmachine: Using SSH client type: native
	I1006 14:44:40.655283  682995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32883 <nil> <nil>}
	I1006 14:44:40.655302  682995 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-481559' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-481559/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-481559' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1006 14:44:40.801276  682995 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1006 14:44:40.801313  682995 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21701-626179/.minikube CaCertPath:/home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21701-626179/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21701-626179/.minikube}
	I1006 14:44:40.801332  682995 ubuntu.go:190] setting up certificates
	I1006 14:44:40.801344  682995 provision.go:84] configureAuth start
	I1006 14:44:40.801398  682995 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-481559
	I1006 14:44:40.819000  682995 provision.go:143] copyHostCerts
	I1006 14:44:40.819052  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21701-626179/.minikube/ca.pem
	I1006 14:44:40.819089  682995 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-626179/.minikube/ca.pem, removing ...
	I1006 14:44:40.819099  682995 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-626179/.minikube/ca.pem
	I1006 14:44:40.819169  682995 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21701-626179/.minikube/ca.pem (1082 bytes)
	I1006 14:44:40.819281  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21701-626179/.minikube/cert.pem
	I1006 14:44:40.819304  682995 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-626179/.minikube/cert.pem, removing ...
	I1006 14:44:40.819309  682995 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-626179/.minikube/cert.pem
	I1006 14:44:40.819338  682995 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21701-626179/.minikube/cert.pem (1123 bytes)
	I1006 14:44:40.819400  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21701-626179/.minikube/key.pem
	I1006 14:44:40.819416  682995 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-626179/.minikube/key.pem, removing ...
	I1006 14:44:40.819428  682995 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-626179/.minikube/key.pem
	I1006 14:44:40.819460  682995 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21701-626179/.minikube/key.pem (1679 bytes)
	I1006 14:44:40.819525  682995 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21701-626179/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca-key.pem org=jenkins.ha-481559 san=[127.0.0.1 192.168.49.2 ha-481559 localhost minikube]
	I1006 14:44:40.896257  682995 provision.go:177] copyRemoteCerts
	I1006 14:44:40.896328  682995 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1006 14:44:40.896370  682995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:44:40.914092  682995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32883 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa Username:docker}
	I1006 14:44:41.016898  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1006 14:44:41.016969  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1006 14:44:41.037131  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1006 14:44:41.037215  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1006 14:44:41.055180  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1006 14:44:41.055258  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1006 14:44:41.073045  682995 provision.go:87] duration metric: took 271.684433ms to configureAuth
	I1006 14:44:41.073074  682995 ubuntu.go:206] setting minikube options for container-runtime
	I1006 14:44:41.073312  682995 config.go:182] Loaded profile config "ha-481559": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 14:44:41.073456  682995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:44:41.092548  682995 main.go:141] libmachine: Using SSH client type: native
	I1006 14:44:41.092838  682995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32883 <nil> <nil>}
	I1006 14:44:41.092869  682995 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1006 14:44:41.356221  682995 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
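
The SSH command above writes an --insecure-registry flag for the service CIDR into CRI-O's sysconfig drop-in and restarts the runtime. A verification sketch (paths from the log):

    # Confirm the drop-in contents and that crio came back up after the restart
    docker exec ha-481559 cat /etc/sysconfig/crio.minikube
    docker exec ha-481559 systemctl is-active crio
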
	I1006 14:44:41.356247  682995 machine.go:96] duration metric: took 1.059160507s to provisionDockerMachine
	I1006 14:44:41.356259  682995 client.go:171] duration metric: took 6.924524382s to LocalClient.Create
	I1006 14:44:41.356282  682995 start.go:167] duration metric: took 6.924591304s to libmachine.API.Create "ha-481559"
	I1006 14:44:41.356295  682995 start.go:293] postStartSetup for "ha-481559" (driver="docker")
	I1006 14:44:41.356322  682995 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1006 14:44:41.356396  682995 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1006 14:44:41.356453  682995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:44:41.374424  682995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32883 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa Username:docker}
	I1006 14:44:41.479545  682995 ssh_runner.go:195] Run: cat /etc/os-release
	I1006 14:44:41.483318  682995 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1006 14:44:41.483345  682995 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1006 14:44:41.483356  682995 filesync.go:126] Scanning /home/jenkins/minikube-integration/21701-626179/.minikube/addons for local assets ...
	I1006 14:44:41.483402  682995 filesync.go:126] Scanning /home/jenkins/minikube-integration/21701-626179/.minikube/files for local assets ...
	I1006 14:44:41.483499  682995 filesync.go:149] local asset: /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem -> 6297192.pem in /etc/ssl/certs
	I1006 14:44:41.483510  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem -> /etc/ssl/certs/6297192.pem
	I1006 14:44:41.483603  682995 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1006 14:44:41.491409  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem --> /etc/ssl/certs/6297192.pem (1708 bytes)
	I1006 14:44:41.511609  682995 start.go:296] duration metric: took 155.29938ms for postStartSetup
	I1006 14:44:41.511914  682995 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-481559
	I1006 14:44:41.529867  682995 profile.go:143] Saving config to /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/config.json ...
	I1006 14:44:41.530158  682995 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1006 14:44:41.530223  682995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:44:41.547995  682995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32883 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa Username:docker}
	I1006 14:44:41.647810  682995 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1006 14:44:41.652637  682995 start.go:128] duration metric: took 7.223117194s to createHost
	I1006 14:44:41.652662  682995 start.go:83] releasing machines lock for "ha-481559", held for 7.223254897s
	I1006 14:44:41.652730  682995 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-481559
	I1006 14:44:41.670486  682995 ssh_runner.go:195] Run: cat /version.json
	I1006 14:44:41.670511  682995 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1006 14:44:41.670555  682995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:44:41.670581  682995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:44:41.689278  682995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32883 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa Username:docker}
	I1006 14:44:41.689801  682995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32883 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa Username:docker}
	I1006 14:44:41.845142  682995 ssh_runner.go:195] Run: systemctl --version
	I1006 14:44:41.852333  682995 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1006 14:44:41.886799  682995 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1006 14:44:41.891575  682995 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1006 14:44:41.891645  682995 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1006 14:44:41.918020  682995 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1006 14:44:41.918049  682995 start.go:495] detecting cgroup driver to use...
	I1006 14:44:41.918088  682995 detect.go:190] detected "systemd" cgroup driver on host os
	I1006 14:44:41.918148  682995 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1006 14:44:41.934827  682995 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1006 14:44:41.946573  682995 docker.go:218] disabling cri-docker service (if available) ...
	I1006 14:44:41.946626  682995 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1006 14:44:41.961811  682995 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1006 14:44:41.978333  682995 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1006 14:44:42.056893  682995 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1006 14:44:42.140645  682995 docker.go:234] disabling docker service ...
	I1006 14:44:42.140713  682995 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1006 14:44:42.159372  682995 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1006 14:44:42.171857  682995 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1006 14:44:42.255908  682995 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1006 14:44:42.340081  682995 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1006 14:44:42.352916  682995 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1006 14:44:42.367142  682995 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1006 14:44:42.367215  682995 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:44:42.377866  682995 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1006 14:44:42.377939  682995 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:44:42.387157  682995 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:44:42.395944  682995 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:44:42.404768  682995 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1006 14:44:42.412712  682995 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:44:42.420910  682995 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:44:42.434108  682995 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:44:42.442895  682995 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1006 14:44:42.450289  682995 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1006 14:44:42.457667  682995 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 14:44:42.535385  682995 ssh_runner.go:195] Run: sudo systemctl restart crio
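
The run of sed edits above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: the pause image, the systemd cgroup manager, the pod-scoped conmon cgroup, and the net.ipv4.ip_unprivileged_port_start=0 default sysctl. To confirm the result after the restart (a sketch):

    # All four values set by the sed commands should appear in the drop-in
    docker exec ha-481559 grep -E \
      'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
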
	I1006 14:44:42.643348  682995 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1006 14:44:42.643424  682995 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1006 14:44:42.647404  682995 start.go:563] Will wait 60s for crictl version
	I1006 14:44:42.647467  682995 ssh_runner.go:195] Run: which crictl
	I1006 14:44:42.651000  682995 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1006 14:44:42.675962  682995 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1006 14:44:42.676044  682995 ssh_runner.go:195] Run: crio --version
	I1006 14:44:42.705541  682995 ssh_runner.go:195] Run: crio --version
	I1006 14:44:42.736773  682995 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1006 14:44:42.738090  682995 cli_runner.go:164] Run: docker network inspect ha-481559 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1006 14:44:42.754892  682995 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1006 14:44:42.759274  682995 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1006 14:44:42.770415  682995 kubeadm.go:883] updating cluster {Name:ha-481559 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-481559 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APISe
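rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...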
	I1006 14:44:42.770534  682995 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1006 14:44:42.770581  682995 ssh_runner.go:195] Run: sudo crictl images --output json
	I1006 14:44:42.805187  682995 crio.go:514] all images are preloaded for cri-o runtime.
	I1006 14:44:42.805221  682995 crio.go:433] Images already preloaded, skipping extraction
	I1006 14:44:42.805274  682995 ssh_runner.go:195] Run: sudo crictl images --output json
	I1006 14:44:42.831096  682995 crio.go:514] all images are preloaded for cri-o runtime.
	I1006 14:44:42.831123  682995 cache_images.go:85] Images are preloaded, skipping loading
	I1006 14:44:42.831132  682995 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1006 14:44:42.831244  682995 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-481559 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-481559 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
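
The kubelet unit drop-in above is rendered from the cluster config and copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf later in the log. To see the unit exactly as systemd resolves it (a sketch):

    # Print kubelet.service plus all drop-ins; the ExecStart should carry
    # --hostname-override=ha-481559 and --node-ip=192.168.49.2
    docker exec ha-481559 systemctl cat kubelet
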
	I1006 14:44:42.831321  682995 ssh_runner.go:195] Run: crio config
	I1006 14:44:42.877768  682995 cni.go:84] Creating CNI manager for ""
	I1006 14:44:42.877790  682995 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1006 14:44:42.877819  682995 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1006 14:44:42.877840  682995 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-481559 NodeName:ha-481559 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1006 14:44:42.877966  682995 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-481559"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
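
The generated kubeadm config above is written to /var/tmp/minikube/kubeadm.yaml.new further down in the log. It can be syntax- and semantics-checked without mutating node state via kubeadm's dry-run mode (illustrative only; minikube drives kubeadm itself, and dry-run output on a half-provisioned node may differ):

    # Validate the generated config against the pinned kubeadm binary
    docker exec ha-481559 /var/lib/minikube/binaries/v1.34.1/kubeadm init \
      --config /var/tmp/minikube/kubeadm.yaml.new --dry-run
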
	I1006 14:44:42.877993  682995 kube-vip.go:115] generating kube-vip config ...
	I1006 14:44:42.878035  682995 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1006 14:44:42.890886  682995 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1006 14:44:42.890995  682995 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
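
With the ip_vs modules unavailable (see the lsmod failure above), kube-vip skips IPVS load-balancing and falls back to plain ARP-based VIP failover: the static pod claims 192.168.49.254 (the address later written to /etc/hosts as control-plane.minikube.internal) on eth0 of whichever control-plane node holds the plndr-cp-lock lease. Once the apiserver is running, reachability can be probed by hand (a sketch; -k skips certificate verification since only the VIP path is being tested):

    # Expect "ok" once kube-vip holds the VIP and the apiserver is healthy
    docker exec ha-481559 curl -sk https://192.168.49.254:8443/healthz
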
	I1006 14:44:42.891046  682995 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1006 14:44:42.899063  682995 binaries.go:44] Found k8s binaries, skipping transfer
	I1006 14:44:42.899132  682995 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1006 14:44:42.906926  682995 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1006 14:44:42.919358  682995 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1006 14:44:42.934141  682995 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1006 14:44:42.945961  682995 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I1006 14:44:42.959489  682995 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1006 14:44:42.962953  682995 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1006 14:44:42.972760  682995 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 14:44:43.053996  682995 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1006 14:44:43.077665  682995 certs.go:69] Setting up /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559 for IP: 192.168.49.2
	I1006 14:44:43.077692  682995 certs.go:195] generating shared ca certs ...
	I1006 14:44:43.077714  682995 certs.go:227] acquiring lock for ca certs: {Name:mka0cc25cb6a953e937aa825fc55167759271aaf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:44:43.077856  682995 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21701-626179/.minikube/ca.key
	I1006 14:44:43.077899  682995 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21701-626179/.minikube/proxy-client-ca.key
	I1006 14:44:43.077909  682995 certs.go:257] generating profile certs ...
	I1006 14:44:43.077963  682995 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/client.key
	I1006 14:44:43.077983  682995 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/client.crt with IP's: []
	I1006 14:44:43.259387  682995 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/client.crt ...
	I1006 14:44:43.259418  682995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/client.crt: {Name:mk058803c7a7f0f2aa3fb547a3aafbba9518c3f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:44:43.259607  682995 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/client.key ...
	I1006 14:44:43.259619  682995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/client.key: {Name:mk0ae3492597f7c1edf0d7262770452fa244a40b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:44:43.265151  682995 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.key.6031b710
	I1006 14:44:43.265175  682995 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.crt.6031b710 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I1006 14:44:43.807062  682995 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.crt.6031b710 ...
	I1006 14:44:43.807095  682995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.crt.6031b710: {Name:mk30dd14f07a4b732bb60853cc2fd5f84f73e2f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:44:43.807283  682995 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.key.6031b710 ...
	I1006 14:44:43.807298  682995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.key.6031b710: {Name:mkf3f5fbdf7957143c03cb611320a2e02acb94c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:44:43.807374  682995 certs.go:382] copying /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.crt.6031b710 -> /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.crt
	I1006 14:44:43.807489  682995 certs.go:386] copying /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.key.6031b710 -> /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.key
	I1006 14:44:43.807558  682995 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.key
	I1006 14:44:43.807574  682995 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.crt with IP's: []
	I1006 14:44:43.994115  682995 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.crt ...
	I1006 14:44:43.994149  682995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.crt: {Name:mk715c6902e25626016d7eb8fdb7b52f0fdce895 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:44:43.994338  682995 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.key ...
	I1006 14:44:43.994350  682995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.key: {Name:mka438ddf42b96ca34511dda1ce60f08f1d48b59 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:44:43.994429  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1006 14:44:43.994449  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1006 14:44:43.994460  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1006 14:44:43.994470  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1006 14:44:43.994480  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1006 14:44:43.994490  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1006 14:44:43.994510  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1006 14:44:43.994522  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1006 14:44:43.994570  682995 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/629719.pem (1338 bytes)
	W1006 14:44:43.994617  682995 certs.go:480] ignoring /home/jenkins/minikube-integration/21701-626179/.minikube/certs/629719_empty.pem, impossibly tiny 0 bytes
	I1006 14:44:43.994630  682995 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca-key.pem (1675 bytes)
	I1006 14:44:43.994653  682995 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem (1082 bytes)
	I1006 14:44:43.994674  682995 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/cert.pem (1123 bytes)
	I1006 14:44:43.994701  682995 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/key.pem (1679 bytes)
	I1006 14:44:43.994739  682995 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem (1708 bytes)
	I1006 14:44:43.994772  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem -> /usr/share/ca-certificates/6297192.pem
	I1006 14:44:43.994786  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1006 14:44:43.994798  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/629719.pem -> /usr/share/ca-certificates/629719.pem
	I1006 14:44:43.995423  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1006 14:44:44.014422  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1006 14:44:44.032422  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1006 14:44:44.050727  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1006 14:44:44.068490  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1006 14:44:44.085540  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1006 14:44:44.102941  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1006 14:44:44.121043  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1006 14:44:44.139583  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem --> /usr/share/ca-certificates/6297192.pem (1708 bytes)
	I1006 14:44:44.159654  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1006 14:44:44.176939  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/certs/629719.pem --> /usr/share/ca-certificates/629719.pem (1338 bytes)
	I1006 14:44:44.194332  682995 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1006 14:44:44.207641  682995 ssh_runner.go:195] Run: openssl version
	I1006 14:44:44.214349  682995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6297192.pem && ln -fs /usr/share/ca-certificates/6297192.pem /etc/ssl/certs/6297192.pem"
	I1006 14:44:44.223426  682995 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6297192.pem
	I1006 14:44:44.227339  682995 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  6 14:13 /usr/share/ca-certificates/6297192.pem
	I1006 14:44:44.227401  682995 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6297192.pem
	I1006 14:44:44.261578  682995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6297192.pem /etc/ssl/certs/3ec20f2e.0"
	I1006 14:44:44.270472  682995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1006 14:44:44.279083  682995 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1006 14:44:44.282749  682995 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  6 13:56 /usr/share/ca-certificates/minikubeCA.pem
	I1006 14:44:44.282813  682995 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1006 14:44:44.316484  682995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1006 14:44:44.325228  682995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/629719.pem && ln -fs /usr/share/ca-certificates/629719.pem /etc/ssl/certs/629719.pem"
	I1006 14:44:44.334098  682995 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/629719.pem
	I1006 14:44:44.337988  682995 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  6 14:13 /usr/share/ca-certificates/629719.pem
	I1006 14:44:44.338051  682995 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/629719.pem
	I1006 14:44:44.371914  682995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/629719.pem /etc/ssl/certs/51391683.0"
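	The hash-and-symlink sequence above follows OpenSSL's hashed certificate directory convention: `openssl x509 -hash -noout` prints the subject-name hash (b5213941 for minikubeCA.pem in this run), and a symlink named <hash>.0 under /etc/ssl/certs is what lets TLS clients locate the CA. A minimal sketch of the same steps, assuming a cert at $PEM:

	  PEM=/usr/share/ca-certificates/minikubeCA.pem
	  h=$(openssl x509 -hash -noout -in "$PEM")     # prints e.g. b5213941
	  sudo ln -fs "$PEM" "/etc/ssl/certs/${h}.0"    # trailing .0 disambiguates hash collisions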
	I1006 14:44:44.380847  682995 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1006 14:44:44.384643  682995 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1006 14:44:44.384694  682995 kubeadm.go:400] StartCluster: {Name:ha-481559 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-481559 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 14:44:44.384758  682995 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1006 14:44:44.384823  682995 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1006 14:44:44.413083  682995 cri.go:89] found id: ""
	I1006 14:44:44.413145  682995 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1006 14:44:44.421446  682995 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1006 14:44:44.429380  682995 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1006 14:44:44.429431  682995 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1006 14:44:44.437643  682995 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1006 14:44:44.437667  682995 kubeadm.go:157] found existing configuration files:
	
	I1006 14:44:44.437726  682995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1006 14:44:44.445948  682995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1006 14:44:44.446021  682995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1006 14:44:44.453451  682995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1006 14:44:44.460986  682995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1006 14:44:44.461064  682995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1006 14:44:44.468259  682995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1006 14:44:44.475830  682995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1006 14:44:44.475882  682995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1006 14:44:44.483080  682995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1006 14:44:44.490569  682995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1006 14:44:44.490632  682995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
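	The four grep-then-rm pairs above are minikube's stale-kubeconfig sweep: any conf file that does not reference the expected control-plane endpoint is removed before kubeadm init runs. Condensed into a single loop over the same files and endpoint shown in the log:

	  for f in admin kubelet controller-manager scheduler; do
	    sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/${f}.conf" \
	      || sudo rm -f "/etc/kubernetes/${f}.conf"   # missing or stale: delete and let kubeadm regenerate
	  done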
	I1006 14:44:44.498056  682995 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1006 14:44:44.560210  682995 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1006 14:44:44.618315  682995 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1006 14:48:49.762009  682995 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1006 14:48:49.762136  682995 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1006 14:48:49.765019  682995 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1006 14:48:49.765065  682995 kubeadm.go:318] [preflight] Running pre-flight checks
	I1006 14:48:49.765142  682995 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1006 14:48:49.765192  682995 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1006 14:48:49.765263  682995 kubeadm.go:318] OS: Linux
	I1006 14:48:49.765329  682995 kubeadm.go:318] CGROUPS_CPU: enabled
	I1006 14:48:49.765384  682995 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1006 14:48:49.765424  682995 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1006 14:48:49.765465  682995 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1006 14:48:49.765507  682995 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1006 14:48:49.765557  682995 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1006 14:48:49.765644  682995 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1006 14:48:49.765713  682995 kubeadm.go:318] CGROUPS_IO: enabled
	I1006 14:48:49.765816  682995 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1006 14:48:49.765897  682995 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1006 14:48:49.765974  682995 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1006 14:48:49.766033  682995 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1006 14:48:49.768189  682995 out.go:252]   - Generating certificates and keys ...
	I1006 14:48:49.768304  682995 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1006 14:48:49.768391  682995 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1006 14:48:49.768495  682995 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1006 14:48:49.768546  682995 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1006 14:48:49.768600  682995 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1006 14:48:49.768641  682995 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1006 14:48:49.768684  682995 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1006 14:48:49.768778  682995 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [ha-481559 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1006 14:48:49.768847  682995 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1006 14:48:49.768982  682995 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [ha-481559 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1006 14:48:49.769042  682995 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1006 14:48:49.769108  682995 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1006 14:48:49.769166  682995 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1006 14:48:49.769263  682995 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1006 14:48:49.769339  682995 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1006 14:48:49.769416  682995 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1006 14:48:49.769489  682995 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1006 14:48:49.769549  682995 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1006 14:48:49.769601  682995 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1006 14:48:49.769671  682995 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1006 14:48:49.769753  682995 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1006 14:48:49.771489  682995 out.go:252]   - Booting up control plane ...
	I1006 14:48:49.771577  682995 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1006 14:48:49.771664  682995 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1006 14:48:49.771742  682995 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1006 14:48:49.771858  682995 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1006 14:48:49.771974  682995 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1006 14:48:49.772108  682995 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1006 14:48:49.772220  682995 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1006 14:48:49.772288  682995 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1006 14:48:49.772413  682995 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1006 14:48:49.772556  682995 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1006 14:48:49.772647  682995 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.501252368s
	I1006 14:48:49.772772  682995 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1006 14:48:49.772891  682995 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1006 14:48:49.772971  682995 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1006 14:48:49.773033  682995 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1006 14:48:49.773108  682995 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.001319326s
	I1006 14:48:49.773189  682995 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001358761s
	I1006 14:48:49.773304  682995 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.001281021s
	I1006 14:48:49.773319  682995 kubeadm.go:318] 
	I1006 14:48:49.773407  682995 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1006 14:48:49.773472  682995 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1006 14:48:49.773545  682995 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1006 14:48:49.773657  682995 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1006 14:48:49.773771  682995 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1006 14:48:49.773850  682995 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1006 14:48:49.773891  682995 kubeadm.go:318] 
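	kubeadm's crictl suggestion above can be taken one step further. A hedged triage loop, over the same CRI-O socket, that tails the logs of every non-pause kube container (container IDs and output depend on the host; here the earlier `crictl ps -a --quiet` calls returned nothing, which is itself the symptom):

	  sock=unix:///var/run/crio/crio.sock
	  sudo crictl --runtime-endpoint "$sock" ps -a \
	    | awk '/kube/ && !/pause/ {print $1}' \
	    | while read -r id; do
	        sudo crictl --runtime-endpoint "$sock" logs --tail=20 "$id"   # last 20 lines per container
	      done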
	W1006 14:48:49.774048  682995 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ha-481559 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ha-481559 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.501252368s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.001319326s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001358761s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001281021s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	I1006 14:48:49.774147  682995 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1006 14:48:52.524900  682995 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.75072398s)
	I1006 14:48:52.524985  682995 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1006 14:48:52.538104  682995 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1006 14:48:52.538173  682995 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1006 14:48:52.546610  682995 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1006 14:48:52.546639  682995 kubeadm.go:157] found existing configuration files:
	
	I1006 14:48:52.546692  682995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1006 14:48:52.555271  682995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1006 14:48:52.555334  682995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1006 14:48:52.564502  682995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1006 14:48:52.572861  682995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1006 14:48:52.572925  682995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1006 14:48:52.580681  682995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1006 14:48:52.588574  682995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1006 14:48:52.588636  682995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1006 14:48:52.596314  682995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1006 14:48:52.604007  682995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1006 14:48:52.604073  682995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1006 14:48:52.611967  682995 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1006 14:48:52.650794  682995 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1006 14:48:52.650844  682995 kubeadm.go:318] [preflight] Running pre-flight checks
	I1006 14:48:52.671446  682995 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1006 14:48:52.671559  682995 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1006 14:48:52.671628  682995 kubeadm.go:318] OS: Linux
	I1006 14:48:52.671718  682995 kubeadm.go:318] CGROUPS_CPU: enabled
	I1006 14:48:52.671766  682995 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1006 14:48:52.671811  682995 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1006 14:48:52.671850  682995 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1006 14:48:52.671890  682995 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1006 14:48:52.671928  682995 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1006 14:48:52.671972  682995 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1006 14:48:52.672010  682995 kubeadm.go:318] CGROUPS_IO: enabled
	I1006 14:48:52.732758  682995 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1006 14:48:52.732876  682995 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1006 14:48:52.732979  682995 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1006 14:48:52.739914  682995 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1006 14:48:52.743428  682995 out.go:252]   - Generating certificates and keys ...
	I1006 14:48:52.743535  682995 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1006 14:48:52.743654  682995 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1006 14:48:52.743727  682995 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1006 14:48:52.743777  682995 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1006 14:48:52.743861  682995 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1006 14:48:52.743911  682995 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1006 14:48:52.743985  682995 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1006 14:48:52.744055  682995 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1006 14:48:52.744143  682995 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1006 14:48:52.744228  682995 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1006 14:48:52.744266  682995 kubeadm.go:318] [certs] Using the existing "sa" key
	I1006 14:48:52.744323  682995 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1006 14:48:53.107297  682995 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1006 14:48:53.300701  682995 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1006 14:48:53.503166  682995 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1006 14:48:53.664024  682995 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1006 14:48:53.725865  682995 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1006 14:48:53.726293  682995 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1006 14:48:53.728797  682995 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1006 14:48:53.730586  682995 out.go:252]   - Booting up control plane ...
	I1006 14:48:53.730720  682995 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1006 14:48:53.730830  682995 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1006 14:48:53.730903  682995 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1006 14:48:53.744534  682995 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1006 14:48:53.744672  682995 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1006 14:48:53.752267  682995 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1006 14:48:53.752422  682995 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1006 14:48:53.752505  682995 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1006 14:48:53.852049  682995 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1006 14:48:53.852226  682995 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1006 14:48:54.353729  682995 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 501.825241ms
	I1006 14:48:54.356542  682995 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1006 14:48:54.356619  682995 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1006 14:48:54.356695  682995 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1006 14:48:54.356819  682995 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1006 14:52:54.358331  682995 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.001082251s
	I1006 14:52:54.358653  682995 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001136686s
	I1006 14:52:54.358853  682995 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.001070627s
	I1006 14:52:54.358881  682995 kubeadm.go:318] 
	I1006 14:52:54.359059  682995 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1006 14:52:54.359298  682995 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1006 14:52:54.359539  682995 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1006 14:52:54.359760  682995 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1006 14:52:54.359952  682995 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1006 14:52:54.360116  682995 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1006 14:52:54.360148  682995 kubeadm.go:318] 
	I1006 14:52:54.363033  682995 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1006 14:52:54.363163  682995 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1006 14:52:54.363696  682995 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1006 14:52:54.363761  682995 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1006 14:52:54.363858  682995 kubeadm.go:402] duration metric: took 8m9.979166519s to StartCluster
	I1006 14:52:54.363946  682995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:52:54.364031  682995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:52:54.392579  682995 cri.go:89] found id: ""
	I1006 14:52:54.392622  682995 logs.go:282] 0 containers: []
	W1006 14:52:54.392631  682995 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:52:54.392638  682995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:52:54.392693  682995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:52:54.420188  682995 cri.go:89] found id: ""
	I1006 14:52:54.420226  682995 logs.go:282] 0 containers: []
	W1006 14:52:54.420237  682995 logs.go:284] No container was found matching "etcd"
	I1006 14:52:54.420245  682995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:52:54.420299  682995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:52:54.445694  682995 cri.go:89] found id: ""
	I1006 14:52:54.445723  682995 logs.go:282] 0 containers: []
	W1006 14:52:54.445733  682995 logs.go:284] No container was found matching "coredns"
	I1006 14:52:54.445740  682995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:52:54.445791  682995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:52:54.471923  682995 cri.go:89] found id: ""
	I1006 14:52:54.471954  682995 logs.go:282] 0 containers: []
	W1006 14:52:54.471962  682995 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:52:54.471971  682995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:52:54.472030  682995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:52:54.498805  682995 cri.go:89] found id: ""
	I1006 14:52:54.498836  682995 logs.go:282] 0 containers: []
	W1006 14:52:54.498848  682995 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:52:54.498857  682995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:52:54.498922  682995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:52:54.524613  682995 cri.go:89] found id: ""
	I1006 14:52:54.524638  682995 logs.go:282] 0 containers: []
	W1006 14:52:54.524646  682995 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:52:54.524652  682995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:52:54.524708  682995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:52:54.551140  682995 cri.go:89] found id: ""
	I1006 14:52:54.551170  682995 logs.go:282] 0 containers: []
	W1006 14:52:54.551181  682995 logs.go:284] No container was found matching "kindnet"
	I1006 14:52:54.551194  682995 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:52:54.551220  682995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:52:54.615573  682995 logs.go:123] Gathering logs for container status ...
	I1006 14:52:54.615607  682995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:52:54.645703  682995 logs.go:123] Gathering logs for kubelet ...
	I1006 14:52:54.645732  682995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:52:54.709506  682995 logs.go:123] Gathering logs for dmesg ...
	I1006 14:52:54.709543  682995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:52:54.722963  682995 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:52:54.722997  682995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:52:54.783016  682995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:52:54.774940    2611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 14:52:54.776283    2611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 14:52:54.777585    2611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 14:52:54.778053    2611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 14:52:54.779590    2611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:52:54.774940    2611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 14:52:54.776283    2611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 14:52:54.777585    2611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 14:52:54.778053    2611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 14:52:54.779590    2611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
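	The `connection refused` on localhost:8443 is consistent with the three health endpoints kubeadm polled never coming up. A quick manual probe of those same endpoints (URLs taken from the log; -k because the scheduler and controller-manager serve self-signed certs, -m 5 to bound each attempt) would look like:

	  curl -ksm 5 https://192.168.49.2:8443/livez   || echo 'kube-apiserver unreachable'
	  curl -ksm 5 https://127.0.0.1:10257/healthz   || echo 'kube-controller-manager unreachable'
	  curl -ksm 5 https://127.0.0.1:10259/livez     || echo 'kube-scheduler unreachable'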
	W1006 14:52:54.783054  682995 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.825241ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.001082251s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001136686s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001070627s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1006 14:52:54.783107  682995 out.go:285] * 
	W1006 14:52:54.783182  682995 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.825241ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.001082251s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001136686s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001070627s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1006 14:52:54.783200  682995 out.go:285] * 
	W1006 14:52:54.785658  682995 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1006 14:52:54.789273  682995 out.go:203] 
	W1006 14:52:54.790573  682995 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.825241ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.001082251s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001136686s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001070627s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1006 14:52:54.790604  682995 out.go:285] * 
	I1006 14:52:54.791821  682995 out.go:203] 
	
	
	==> CRI-O <==
	Oct 06 14:54:27 ha-481559 crio[777]: time="2025-10-06T14:54:27.250142803Z" level=info msg="createCtr: removing container 44bd9526540da0c835e3df8165ad6d393d56f73b366013b65c8d81c87c72a71c" id=ea7eba65-7901-4b0a-8dc1-512a8ae40c08 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:54:27 ha-481559 crio[777]: time="2025-10-06T14:54:27.250176316Z" level=info msg="createCtr: deleting container 44bd9526540da0c835e3df8165ad6d393d56f73b366013b65c8d81c87c72a71c from storage" id=ea7eba65-7901-4b0a-8dc1-512a8ae40c08 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:54:27 ha-481559 crio[777]: time="2025-10-06T14:54:27.252283198Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-ha-481559_kube-system_b4e1cca8a09d3789a7e0862428dfe0db_0" id=ea7eba65-7901-4b0a-8dc1-512a8ae40c08 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:54:28 ha-481559 crio[777]: time="2025-10-06T14:54:28.221551253Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=17d78e94-e04d-498b-bc45-5d0791b6c8a5 name=/runtime.v1.ImageService/ImageStatus
	Oct 06 14:54:28 ha-481559 crio[777]: time="2025-10-06T14:54:28.222818644Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=47a2165f-a6f2-492d-9d1e-29dca5bcd8d0 name=/runtime.v1.ImageService/ImageStatus
	Oct 06 14:54:28 ha-481559 crio[777]: time="2025-10-06T14:54:28.223780089Z" level=info msg="Creating container: kube-system/etcd-ha-481559/etcd" id=ecbc1c2a-3ed9-4452-81c8-a6b6b312f34f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:54:28 ha-481559 crio[777]: time="2025-10-06T14:54:28.224008001Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 14:54:28 ha-481559 crio[777]: time="2025-10-06T14:54:28.227534186Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 14:54:28 ha-481559 crio[777]: time="2025-10-06T14:54:28.227929945Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 14:54:28 ha-481559 crio[777]: time="2025-10-06T14:54:28.244953858Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=ecbc1c2a-3ed9-4452-81c8-a6b6b312f34f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:54:28 ha-481559 crio[777]: time="2025-10-06T14:54:28.246382165Z" level=info msg="createCtr: deleting container ID c1376676dafaf7b4d10a72a589a3ae2d56ecf790744e031ae536ebf8175e4485 from idIndex" id=ecbc1c2a-3ed9-4452-81c8-a6b6b312f34f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:54:28 ha-481559 crio[777]: time="2025-10-06T14:54:28.246426068Z" level=info msg="createCtr: removing container c1376676dafaf7b4d10a72a589a3ae2d56ecf790744e031ae536ebf8175e4485" id=ecbc1c2a-3ed9-4452-81c8-a6b6b312f34f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:54:28 ha-481559 crio[777]: time="2025-10-06T14:54:28.246465998Z" level=info msg="createCtr: deleting container c1376676dafaf7b4d10a72a589a3ae2d56ecf790744e031ae536ebf8175e4485 from storage" id=ecbc1c2a-3ed9-4452-81c8-a6b6b312f34f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:54:28 ha-481559 crio[777]: time="2025-10-06T14:54:28.249858456Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-ha-481559_kube-system_520c6060936b1c2aac479c99ed6c0355_0" id=ecbc1c2a-3ed9-4452-81c8-a6b6b312f34f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:54:30 ha-481559 crio[777]: time="2025-10-06T14:54:30.222474023Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=e2afb2cc-8b95-45ef-839d-d0dd5c34800d name=/runtime.v1.ImageService/ImageStatus
	Oct 06 14:54:30 ha-481559 crio[777]: time="2025-10-06T14:54:30.22368953Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=401de123-749a-4ebc-8ab1-078ad9c73c34 name=/runtime.v1.ImageService/ImageStatus
	Oct 06 14:54:30 ha-481559 crio[777]: time="2025-10-06T14:54:30.224710592Z" level=info msg="Creating container: kube-system/kube-scheduler-ha-481559/kube-scheduler" id=2af7715c-4231-40ed-a841-9fbd70a525e3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:54:30 ha-481559 crio[777]: time="2025-10-06T14:54:30.225088554Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 14:54:30 ha-481559 crio[777]: time="2025-10-06T14:54:30.22924992Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 14:54:30 ha-481559 crio[777]: time="2025-10-06T14:54:30.229878709Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 14:54:30 ha-481559 crio[777]: time="2025-10-06T14:54:30.249923087Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=2af7715c-4231-40ed-a841-9fbd70a525e3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:54:30 ha-481559 crio[777]: time="2025-10-06T14:54:30.251834213Z" level=info msg="createCtr: deleting container ID 4ccf5071d4a15329b25d201d70f0042454b12c8c9f251bd3ce8f5e7daa11b368 from idIndex" id=2af7715c-4231-40ed-a841-9fbd70a525e3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:54:30 ha-481559 crio[777]: time="2025-10-06T14:54:30.251887095Z" level=info msg="createCtr: removing container 4ccf5071d4a15329b25d201d70f0042454b12c8c9f251bd3ce8f5e7daa11b368" id=2af7715c-4231-40ed-a841-9fbd70a525e3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:54:30 ha-481559 crio[777]: time="2025-10-06T14:54:30.251932195Z" level=info msg="createCtr: deleting container 4ccf5071d4a15329b25d201d70f0042454b12c8c9f251bd3ce8f5e7daa11b368 from storage" id=2af7715c-4231-40ed-a841-9fbd70a525e3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:54:30 ha-481559 crio[777]: time="2025-10-06T14:54:30.2573433Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-ha-481559_kube-system_cc93cb8d89afaa943672c70952b45174_0" id=2af7715c-4231-40ed-a841-9fbd70a525e3 name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:54:31.686507    3404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 14:54:31.687016    3404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 14:54:31.688646    3404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 14:54:31.689094    3404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 14:54:31.690952    3404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	
	
	==> kernel <==
	 14:54:31 up  5:36,  0 user,  load average: 0.34, 0.11, 0.16
	Linux ha-481559 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 06 14:54:27 ha-481559 kubelet[1985]:  > podSandboxID="cadd804367d6dcdba2fb49fe06e3c1db8b35e6ee5c505328925ae346d4cdb867"
	Oct 06 14:54:27 ha-481559 kubelet[1985]: E1006 14:54:27.252723    1985 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 06 14:54:27 ha-481559 kubelet[1985]:         container kube-apiserver start failed in pod kube-apiserver-ha-481559_kube-system(b4e1cca8a09d3789a7e0862428dfe0db): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 06 14:54:27 ha-481559 kubelet[1985]:  > logger="UnhandledError"
	Oct 06 14:54:27 ha-481559 kubelet[1985]: E1006 14:54:27.252753    1985 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-ha-481559" podUID="b4e1cca8a09d3789a7e0862428dfe0db"
	Oct 06 14:54:28 ha-481559 kubelet[1985]: E1006 14:54:28.221034    1985 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-481559\" not found" node="ha-481559"
	Oct 06 14:54:28 ha-481559 kubelet[1985]: E1006 14:54:28.250166    1985 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 06 14:54:28 ha-481559 kubelet[1985]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 06 14:54:28 ha-481559 kubelet[1985]:  > podSandboxID="a7ce34bebe17bc556bee492a72e0243ebe86fdfcd40a6e28aafa4e286d225bc6"
	Oct 06 14:54:28 ha-481559 kubelet[1985]: E1006 14:54:28.250298    1985 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 06 14:54:28 ha-481559 kubelet[1985]:         container etcd start failed in pod etcd-ha-481559_kube-system(520c6060936b1c2aac479c99ed6c0355): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 06 14:54:28 ha-481559 kubelet[1985]:  > logger="UnhandledError"
	Oct 06 14:54:28 ha-481559 kubelet[1985]: E1006 14:54:28.250344    1985 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-ha-481559" podUID="520c6060936b1c2aac479c99ed6c0355"
	Oct 06 14:54:28 ha-481559 kubelet[1985]: E1006 14:54:28.861462    1985 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-481559?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 06 14:54:29 ha-481559 kubelet[1985]: E1006 14:54:29.038379    1985 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8443: connect: connection refused" event="&Event{ObjectMeta:{ha-481559.186bee56630f6256  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-481559,UID:ha-481559,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ha-481559 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ha-481559,},FirstTimestamp:2025-10-06 14:48:54.214861398 +0000 UTC m=+0.361990569,LastTimestamp:2025-10-06 14:48:54.214861398 +0000 UTC m=+0.361990569,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-481559,}"
	Oct 06 14:54:29 ha-481559 kubelet[1985]: I1006 14:54:29.038950    1985 kubelet_node_status.go:75] "Attempting to register node" node="ha-481559"
	Oct 06 14:54:29 ha-481559 kubelet[1985]: E1006 14:54:29.039334    1985 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-481559"
	Oct 06 14:54:30 ha-481559 kubelet[1985]: E1006 14:54:30.221903    1985 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-481559\" not found" node="ha-481559"
	Oct 06 14:54:30 ha-481559 kubelet[1985]: E1006 14:54:30.257771    1985 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 06 14:54:30 ha-481559 kubelet[1985]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 06 14:54:30 ha-481559 kubelet[1985]:  > podSandboxID="28815a6c32deaa458111079bbac61f47b8e22f338f2282fab7d62077c8b07f1e"
	Oct 06 14:54:30 ha-481559 kubelet[1985]: E1006 14:54:30.257901    1985 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 06 14:54:30 ha-481559 kubelet[1985]:         container kube-scheduler start failed in pod kube-scheduler-ha-481559_kube-system(cc93cb8d89afaa943672c70952b45174): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 06 14:54:30 ha-481559 kubelet[1985]:  > logger="UnhandledError"
	Oct 06 14:54:30 ha-481559 kubelet[1985]: E1006 14:54:30.257947    1985 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-ha-481559" podUID="cc93cb8d89afaa943672c70952b45174"
	

                                                
                                                
-- /stdout --
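For reference, the three "not healthy" probes in the kubeadm output above are plain HTTPS endpoints and can be re-run by hand while the node container is up. A minimal sketch, assuming shell access via `minikube ssh -p ha-481559` and taking the addresses and ports from the log:

	# Same checks kubeadm's wait-control-plane phase polls (endpoints from the log above):
	curl -ks https://192.168.49.2:8443/livez     # kube-apiserver (node IP from the log)
	curl -ks https://127.0.0.1:10259/livez       # kube-scheduler
	curl -ks https://127.0.0.1:10257/healthz     # kube-controller-manager
	# All three refusing connections matches the CRI-O "Container creation error"
	# entries above: the static pod containers were never created, so nothing listens.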
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-481559 -n ha-481559
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-481559 -n ha-481559: exit status 6 (299.892361ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1006 14:54:32.064537  690873 status.go:458] kubeconfig endpoint: get endpoint: "ha-481559" does not appear in /home/jenkins/minikube-integration/21701-626179/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "ha-481559" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddWorkerNode (1.53s)
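Before moving on, the failure mode above can be narrowed with the crictl commands the kubeadm output itself suggests. A sketch, again assuming `minikube ssh -p ha-481559` access; the socket path comes from the log, and the final check is an assumption based on the kubelet's repeated "cannot open sd-bus" errors (a systemd cgroup manager needs systemd's D-Bus socket inside the node):

	# List kube containers, then pull logs for a failing one (hint from kubeadm above):
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID
	# "cannot open sd-bus" points at a missing systemd D-Bus socket inside the node:
	ls -l /run/systemd/private /run/dbus/system_bus_socket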

                                                
                                    
x
+
TestMultiControlPlane/serial/NodeLabels (1.33s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-481559 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
ha_test.go:255: (dbg) Non-zero exit: kubectl --context ha-481559 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (47.277246ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: ha-481559

                                                
                                                
** /stderr **
ha_test.go:257: failed to 'kubectl get nodes' with args "kubectl --context ha-481559 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
ha_test.go:264: failed to decode json from label list: args "kubectl --context ha-481559 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
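The two errors above line up: the profile's entry was never written to the kubeconfig (the status stderr below shows "ha-481559" missing from it), so `--context ha-481559` cannot resolve. A sketch of how one might confirm and, per the warning `minikube status` prints, repair the context:

	kubectl config get-contexts              # ha-481559 absent from the kubeconfig
	minikube -p ha-481559 update-context     # rewrite the kubeconfig entry for the profile
	kubectl --context ha-481559 get nodes --show-labels   # roughly the query the test runs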
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/NodeLabels]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/NodeLabels]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-481559
helpers_test.go:243: (dbg) docker inspect ha-481559:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "8b017d29b6b188c11217460aa328e959f3cceef4aaac68c0efbe9c3f356f27b0",
	        "Created": "2025-10-06T14:44:39.623616791Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 683567,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-06T14:44:39.660699919Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/8b017d29b6b188c11217460aa328e959f3cceef4aaac68c0efbe9c3f356f27b0/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8b017d29b6b188c11217460aa328e959f3cceef4aaac68c0efbe9c3f356f27b0/hostname",
	        "HostsPath": "/var/lib/docker/containers/8b017d29b6b188c11217460aa328e959f3cceef4aaac68c0efbe9c3f356f27b0/hosts",
	        "LogPath": "/var/lib/docker/containers/8b017d29b6b188c11217460aa328e959f3cceef4aaac68c0efbe9c3f356f27b0/8b017d29b6b188c11217460aa328e959f3cceef4aaac68c0efbe9c3f356f27b0-json.log",
	        "Name": "/ha-481559",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-481559:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-481559",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "8b017d29b6b188c11217460aa328e959f3cceef4aaac68c0efbe9c3f356f27b0",
	                "LowerDir": "/var/lib/docker/overlay2/764c4d13032cea1c981a1513641378a4e309e5bc01b5356c6fecba1c3e0e2311-init/diff:/var/lib/docker/overlay2/498c39ad2e273bbda04a4b230222b9767ea2da097b1fe98436168d26143cd080/diff",
	                "MergedDir": "/var/lib/docker/overlay2/764c4d13032cea1c981a1513641378a4e309e5bc01b5356c6fecba1c3e0e2311/merged",
	                "UpperDir": "/var/lib/docker/overlay2/764c4d13032cea1c981a1513641378a4e309e5bc01b5356c6fecba1c3e0e2311/diff",
	                "WorkDir": "/var/lib/docker/overlay2/764c4d13032cea1c981a1513641378a4e309e5bc01b5356c6fecba1c3e0e2311/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-481559",
	                "Source": "/var/lib/docker/volumes/ha-481559/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-481559",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-481559",
	                "name.minikube.sigs.k8s.io": "ha-481559",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7effae92997970d320561b0b86c210815b18a55d65bd555e1bff50158ed38adc",
	            "SandboxKey": "/var/run/docker/netns/7effae929979",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32883"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32884"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32887"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32885"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32886"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-481559": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "42:f3:45:3f:5b:fc",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "be549c6a1ae4457d4629d9a7f86cde88021333ee0af8bb7a740b008115c43dde",
	                    "EndpointID": "b8540561692606ad815fcdb4502c1e36a16141413d3697f4cf48668502930e4c",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-481559",
	                        "8b017d29b6b1"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
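Rather than scanning the full JSON dump above, individual fields can be pulled with `docker inspect` Go templates; a sketch using only names that appear in the output:

	docker inspect -f '{{.State.Status}}' ha-481559
	docker inspect -f '{{(index .NetworkSettings.Networks "ha-481559").IPAddress}}' ha-481559
	docker inspect -f '{{(index .NetworkSettings.Ports "8443/tcp" 0).HostPort}}' ha-481559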
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-481559 -n ha-481559
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-481559 -n ha-481559: exit status 6 (303.753229ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1006 14:54:32.434660  691011 status.go:458] kubeconfig endpoint: get endpoint: "ha-481559" does not appear in /home/jenkins/minikube-integration/21701-626179/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
helpers_test.go:252: <<< TestMultiControlPlane/serial/NodeLabels FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/NodeLabels]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-481559 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/NodeLabels logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                      ARGS                                                       │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-135520 image ls --format yaml --alsologtostderr                                                      │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │ 06 Oct 25 14:40 UTC │
	│ ssh     │ functional-135520 ssh pgrep buildkitd                                                                           │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │                     │
	│ image   │ functional-135520 image build -t localhost/my-image:functional-135520 testdata/build --alsologtostderr          │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │ 06 Oct 25 14:40 UTC │
	│ image   │ functional-135520 image ls                                                                                      │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │ 06 Oct 25 14:40 UTC │
	│ delete  │ -p functional-135520                                                                                            │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:44 UTC │ 06 Oct 25 14:44 UTC │
	│ start   │ ha-481559 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:44 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml                                                │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:52 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- rollout status deployment/busybox                                                          │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:52 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:52 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:52 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:52 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:53 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:53 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:53 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:53 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:53 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:53 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:54 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:54 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:54 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- exec  -- nslookup kubernetes.io                                                            │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:54 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- exec  -- nslookup kubernetes.default                                                       │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:54 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local                                     │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:54 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:54 UTC │                     │
	│ node    │ ha-481559 node add --alsologtostderr -v 5                                                                       │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:54 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/06 14:44:34
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1006 14:44:34.230587  682995 out.go:360] Setting OutFile to fd 1 ...
	I1006 14:44:34.230719  682995 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 14:44:34.230728  682995 out.go:374] Setting ErrFile to fd 2...
	I1006 14:44:34.230733  682995 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 14:44:34.230969  682995 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-626179/.minikube/bin
	I1006 14:44:34.231523  682995 out.go:368] Setting JSON to false
	I1006 14:44:34.232538  682995 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":19610,"bootTime":1759742264,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1006 14:44:34.232651  682995 start.go:140] virtualization: kvm guest
	I1006 14:44:34.235278  682995 out.go:179] * [ha-481559] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1006 14:44:34.236668  682995 out.go:179]   - MINIKUBE_LOCATION=21701
	I1006 14:44:34.236708  682995 notify.go:220] Checking for updates...
	I1006 14:44:34.239256  682995 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1006 14:44:34.240475  682995 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21701-626179/kubeconfig
	I1006 14:44:34.242249  682995 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21701-626179/.minikube
	I1006 14:44:34.243577  682995 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1006 14:44:34.244737  682995 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1006 14:44:34.246267  682995 driver.go:421] Setting default libvirt URI to qemu:///system
	I1006 14:44:34.271626  682995 docker.go:123] docker version: linux-28.5.0:Docker Engine - Community
	I1006 14:44:34.271783  682995 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 14:44:34.334697  682995 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-06 14:44:34.323928193 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1006 14:44:34.334819  682995 docker.go:318] overlay module found
	I1006 14:44:34.336770  682995 out.go:179] * Using the docker driver based on user configuration
	I1006 14:44:34.338109  682995 start.go:304] selected driver: docker
	I1006 14:44:34.338130  682995 start.go:924] validating driver "docker" against <nil>
	I1006 14:44:34.338144  682995 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1006 14:44:34.338750  682995 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 14:44:34.398314  682995 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-06 14:44:34.387376197 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1006 14:44:34.398587  682995 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1006 14:44:34.399080  682995 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1006 14:44:34.401095  682995 out.go:179] * Using Docker driver with root privileges
	I1006 14:44:34.402283  682995 cni.go:84] Creating CNI manager for ""
	I1006 14:44:34.402367  682995 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1006 14:44:34.402383  682995 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1006 14:44:34.402476  682995 start.go:348] cluster config:
	{Name:ha-481559 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-481559 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 14:44:34.403829  682995 out.go:179] * Starting "ha-481559" primary control-plane node in "ha-481559" cluster
	I1006 14:44:34.404899  682995 cache.go:123] Beginning downloading kic base image for docker with crio
	I1006 14:44:34.406166  682995 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1006 14:44:34.407227  682995 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1006 14:44:34.407272  682995 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21701-626179/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1006 14:44:34.407284  682995 cache.go:58] Caching tarball of preloaded images
	I1006 14:44:34.407376  682995 preload.go:233] Found /home/jenkins/minikube-integration/21701-626179/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1006 14:44:34.407382  682995 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1006 14:44:34.407387  682995 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1006 14:44:34.407757  682995 profile.go:143] Saving config to /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/config.json ...
	I1006 14:44:34.407793  682995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/config.json: {Name:mkefd90ec0b9eae63c82d60bab053cdf7b5d9b74 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:44:34.429193  682995 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1006 14:44:34.429233  682995 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1006 14:44:34.429254  682995 cache.go:232] Successfully downloaded all kic artifacts
	I1006 14:44:34.429296  682995 start.go:360] acquireMachinesLock for ha-481559: {Name:mk240cd185ab39e9e4d3fa7c476aea5736cb5b11 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1006 14:44:34.429397  682995 start.go:364] duration metric: took 84.055µs to acquireMachinesLock for "ha-481559"
	I1006 14:44:34.429421  682995 start.go:93] Provisioning new machine with config: &{Name:ha-481559 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-481559 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1006 14:44:34.429503  682995 start.go:125] createHost starting for "" (driver="docker")
	I1006 14:44:34.431456  682995 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1006 14:44:34.431692  682995 start.go:159] libmachine.API.Create for "ha-481559" (driver="docker")
	I1006 14:44:34.431725  682995 client.go:168] LocalClient.Create starting
	I1006 14:44:34.431791  682995 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem
	I1006 14:44:34.431825  682995 main.go:141] libmachine: Decoding PEM data...
	I1006 14:44:34.431843  682995 main.go:141] libmachine: Parsing certificate...
	I1006 14:44:34.431939  682995 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21701-626179/.minikube/certs/cert.pem
	I1006 14:44:34.431977  682995 main.go:141] libmachine: Decoding PEM data...
	I1006 14:44:34.431994  682995 main.go:141] libmachine: Parsing certificate...
	I1006 14:44:34.432416  682995 cli_runner.go:164] Run: docker network inspect ha-481559 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1006 14:44:34.449965  682995 cli_runner.go:211] docker network inspect ha-481559 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1006 14:44:34.450053  682995 network_create.go:284] running [docker network inspect ha-481559] to gather additional debugging logs...
	I1006 14:44:34.450071  682995 cli_runner.go:164] Run: docker network inspect ha-481559
	W1006 14:44:34.468682  682995 cli_runner.go:211] docker network inspect ha-481559 returned with exit code 1
	I1006 14:44:34.468713  682995 network_create.go:287] error running [docker network inspect ha-481559]: docker network inspect ha-481559: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-481559 not found
	I1006 14:44:34.468724  682995 network_create.go:289] output of [docker network inspect ha-481559]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-481559 not found
	
	** /stderr **
	I1006 14:44:34.468902  682995 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1006 14:44:34.488223  682995 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ca2540}
	I1006 14:44:34.488276  682995 network_create.go:124] attempt to create docker network ha-481559 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1006 14:44:34.488338  682995 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-481559 ha-481559
	I1006 14:44:34.548630  682995 network_create.go:108] docker network ha-481559 192.168.49.0/24 created
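The sequence above is a check-then-create network bootstrap: `docker network inspect ha-481559` exits non-zero because the network does not exist yet, minikube probes for a free private subnet (settling on 192.168.49.0/24), and then creates a labelled bridge network with a fixed gateway so the node can be given a deterministic IP. A minimal Go sketch of the same ensure-network flow, assuming the docker CLI is on PATH; the helper name is illustrative and the masquerade/MTU options from the log are trimmed:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // ensureNetwork creates a bridge network with a fixed subnet if it
    // does not already exist. A non-zero exit from inspect is treated
    // as "not found", mirroring the log above.
    func ensureNetwork(name, subnet, gateway string) error {
    	if err := exec.Command("docker", "network", "inspect", name).Run(); err == nil {
    		return nil // already present
    	}
    	out, err := exec.Command("docker", "network", "create",
    		"--driver=bridge",
    		"--subnet="+subnet,
    		"--gateway="+gateway,
    		"--label=created_by.minikube.sigs.k8s.io=true",
    		name).CombinedOutput()
    	if err != nil {
    		return fmt.Errorf("network create: %v: %s", err, out)
    	}
    	return nil
    }

    func main() {
    	if err := ensureNetwork("ha-481559", "192.168.49.0/24", "192.168.49.1"); err != nil {
    		fmt.Println(err)
    	}
    }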
	I1006 14:44:34.548669  682995 kic.go:121] calculated static IP "192.168.49.2" for the "ha-481559" container
	I1006 14:44:34.548729  682995 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1006 14:44:34.566959  682995 cli_runner.go:164] Run: docker volume create ha-481559 --label name.minikube.sigs.k8s.io=ha-481559 --label created_by.minikube.sigs.k8s.io=true
	I1006 14:44:34.586001  682995 oci.go:103] Successfully created a docker volume ha-481559
	I1006 14:44:34.586088  682995 cli_runner.go:164] Run: docker run --rm --name ha-481559-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-481559 --entrypoint /usr/bin/test -v ha-481559:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib
	I1006 14:44:34.994169  682995 oci.go:107] Successfully prepared a docker volume ha-481559
	I1006 14:44:34.994233  682995 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1006 14:44:34.994280  682995 kic.go:194] Starting extracting preloaded images to volume ...
	I1006 14:44:34.994349  682995 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21701-626179/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-481559:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir
	I1006 14:44:39.551248  682995 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21701-626179/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-481559:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir: (4.556814521s)
	I1006 14:44:39.551287  682995 kic.go:203] duration metric: took 4.557022471s to extract preloaded images to volume ...
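The preload volume is populated with a throwaway container: the host tarball is bind-mounted read-only at /preloaded.tar, the named volume is mounted at /extractDir, and the image's entrypoint is overridden to /usr/bin/tar. A hedged Go sketch of that populate-a-volume pattern (the argument values in main are placeholders for the paths in the log):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // extractIntoVolume streams a .tar.lz4 from the host into a named
    // docker volume by running tar inside a disposable container.
    func extractIntoVolume(tarball, volume, image string) error {
    	out, err := exec.Command("docker", "run", "--rm",
    		"--entrypoint", "/usr/bin/tar",
    		"-v", tarball+":/preloaded.tar:ro",
    		"-v", volume+":/extractDir",
    		image,
    		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir",
    	).CombinedOutput()
    	if err != nil {
    		return fmt.Errorf("extract: %v: %s", err, out)
    	}
    	return nil
    }

    func main() {
    	// Placeholder arguments; the run above uses the preloaded-images
    	// tarball and the kicbase image.
    	fmt.Println(extractIntoVolume("/tmp/preloaded.tar.lz4", "ha-481559",
    		"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643"))
    }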
	W1006 14:44:39.551374  682995 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1006 14:44:39.551406  682995 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1006 14:44:39.551451  682995 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1006 14:44:39.608040  682995 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-481559 --name ha-481559 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-481559 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-481559 --network ha-481559 --ip 192.168.49.2 --volume ha-481559:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d
	I1006 14:44:39.865946  682995 cli_runner.go:164] Run: docker container inspect ha-481559 --format={{.State.Running}}
	I1006 14:44:39.883061  682995 cli_runner.go:164] Run: docker container inspect ha-481559 --format={{.State.Status}}
	I1006 14:44:39.901066  682995 cli_runner.go:164] Run: docker exec ha-481559 stat /var/lib/dpkg/alternatives/iptables
	I1006 14:44:39.951869  682995 oci.go:144] the created container "ha-481559" has a running status.
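Note the port layout on the `docker run` above: 22, 2376, 5000, 8443 and 32443 are each published to an ephemeral 127.0.0.1 port, and later steps recover the SSH endpoint (32883 in this run) by templating `docker container inspect`. A small Go sketch of that lookup, assuming the docker CLI:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // hostPortFor returns the ephemeral host port docker assigned to a
    // published container port, e.g. "22/tcp" -> "32883" in this run.
    func hostPortFor(container, port string) (string, error) {
    	tmpl := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports %q) 0).HostPort}}`, port)
    	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
    	if err != nil {
    		return "", err
    	}
    	return strings.TrimSpace(string(out)), nil
    }

    func main() {
    	p, err := hostPortFor("ha-481559", "22/tcp")
    	fmt.Println(p, err)
    }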
	I1006 14:44:39.951908  682995 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa...
	I1006 14:44:40.176341  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1006 14:44:40.176392  682995 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1006 14:44:40.205643  682995 cli_runner.go:164] Run: docker container inspect ha-481559 --format={{.State.Status}}
	I1006 14:44:40.227924  682995 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1006 14:44:40.227948  682995 kic_runner.go:114] Args: [docker exec --privileged ha-481559 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1006 14:44:40.277808  682995 cli_runner.go:164] Run: docker container inspect ha-481559 --format={{.State.Status}}
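SSH access is bootstrapped from a freshly generated keypair under .minikube/machines/ha-481559/: the 381-byte public half is copied to /home/docker/.ssh/authorized_keys and chown'd to the docker user. A minimal sketch of producing such a keypair and its authorized_keys line in Go, using golang.org/x/crypto/ssh (the output file name is illustrative, and minikube's exact key parameters may differ):

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	// 2048-bit RSA keypair.
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	// PEM-encode the private key (what ends up in id_rsa).
    	priv := pem.EncodeToMemory(&pem.Block{
    		Type:  "RSA PRIVATE KEY",
    		Bytes: x509.MarshalPKCS1PrivateKey(key),
    	})
    	_ = os.WriteFile("id_rsa", priv, 0600)
    	// authorized_keys line (what ends up in id_rsa.pub).
    	pub, err := ssh.NewPublicKey(&key.PublicKey)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Print(string(ssh.MarshalAuthorizedKey(pub)))
    }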
	I1006 14:44:40.297063  682995 machine.go:93] provisionDockerMachine start ...
	I1006 14:44:40.297156  682995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:44:40.315828  682995 main.go:141] libmachine: Using SSH client type: native
	I1006 14:44:40.316109  682995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32883 <nil> <nil>}
	I1006 14:44:40.316124  682995 main.go:141] libmachine: About to run SSH command:
	hostname
	I1006 14:44:40.461735  682995 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-481559
	
	I1006 14:44:40.461771  682995 ubuntu.go:182] provisioning hostname "ha-481559"
	I1006 14:44:40.461843  682995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:44:40.481222  682995 main.go:141] libmachine: Using SSH client type: native
	I1006 14:44:40.481551  682995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32883 <nil> <nil>}
	I1006 14:44:40.481575  682995 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-481559 && echo "ha-481559" | sudo tee /etc/hostname
	I1006 14:44:40.636624  682995 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-481559
	
	I1006 14:44:40.636709  682995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:44:40.655017  682995 main.go:141] libmachine: Using SSH client type: native
	I1006 14:44:40.655283  682995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32883 <nil> <nil>}
	I1006 14:44:40.655302  682995 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-481559' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-481559/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-481559' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1006 14:44:40.801276  682995 main.go:141] libmachine: SSH cmd err, output: <nil>: 
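provisionDockerMachine then runs three small shell snippets over the mapped SSH port: read the current hostname, set it persistently via /etc/hostname, and ensure /etc/hosts carries a 127.0.1.1 entry for it (rewriting an existing 127.0.1.1 line when present). A rough Go sketch of executing one such command over SSH with golang.org/x/crypto/ssh, reusing this run's endpoint and user (the key path is illustrative):

    package main

    import (
    	"bytes"
    	"fmt"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	keyBytes, err := os.ReadFile("id_rsa") // the machine key from the previous sketch
    	if err != nil {
    		panic(err)
    	}
    	signer, err := ssh.ParsePrivateKey(keyBytes)
    	if err != nil {
    		panic(err)
    	}
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local throwaway node
    	}
    	client, err := ssh.Dial("tcp", "127.0.0.1:32883", cfg)
    	if err != nil {
    		panic(err)
    	}
    	defer client.Close()
    	sess, err := client.NewSession()
    	if err != nil {
    		panic(err)
    	}
    	defer sess.Close()
    	var out bytes.Buffer
    	sess.Stdout = &out
    	if err := sess.Run("hostname"); err != nil {
    		panic(err)
    	}
    	fmt.Print(out.String()) // "ha-481559" in this run
    }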
	I1006 14:44:40.801313  682995 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21701-626179/.minikube CaCertPath:/home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21701-626179/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21701-626179/.minikube}
	I1006 14:44:40.801332  682995 ubuntu.go:190] setting up certificates
	I1006 14:44:40.801344  682995 provision.go:84] configureAuth start
	I1006 14:44:40.801398  682995 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-481559
	I1006 14:44:40.819000  682995 provision.go:143] copyHostCerts
	I1006 14:44:40.819052  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21701-626179/.minikube/ca.pem
	I1006 14:44:40.819089  682995 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-626179/.minikube/ca.pem, removing ...
	I1006 14:44:40.819099  682995 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-626179/.minikube/ca.pem
	I1006 14:44:40.819169  682995 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21701-626179/.minikube/ca.pem (1082 bytes)
	I1006 14:44:40.819281  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21701-626179/.minikube/cert.pem
	I1006 14:44:40.819304  682995 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-626179/.minikube/cert.pem, removing ...
	I1006 14:44:40.819309  682995 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-626179/.minikube/cert.pem
	I1006 14:44:40.819338  682995 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21701-626179/.minikube/cert.pem (1123 bytes)
	I1006 14:44:40.819400  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21701-626179/.minikube/key.pem
	I1006 14:44:40.819416  682995 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-626179/.minikube/key.pem, removing ...
	I1006 14:44:40.819428  682995 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-626179/.minikube/key.pem
	I1006 14:44:40.819460  682995 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21701-626179/.minikube/key.pem (1679 bytes)
	I1006 14:44:40.819525  682995 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21701-626179/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca-key.pem org=jenkins.ha-481559 san=[127.0.0.1 192.168.49.2 ha-481559 localhost minikube]
	I1006 14:44:40.896257  682995 provision.go:177] copyRemoteCerts
	I1006 14:44:40.896328  682995 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1006 14:44:40.896370  682995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:44:40.914092  682995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32883 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa Username:docker}
	I1006 14:44:41.016898  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1006 14:44:41.016969  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1006 14:44:41.037131  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1006 14:44:41.037215  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1006 14:44:41.055180  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1006 14:44:41.055258  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1006 14:44:41.073045  682995 provision.go:87] duration metric: took 271.684433ms to configureAuth
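configureAuth refreshes the host-side copies of ca.pem, cert.pem and key.pem, then issues a server certificate whose SANs cover every name the dockerized node might be addressed by (127.0.0.1, 192.168.49.2, ha-481559, localhost, minikube) and scp's server.pem/server-key.pem into /etc/docker. A self-contained Go sketch of issuing a SAN-bearing server certificate from a CA key; the toy in-memory CA below stands in for minikube's persisted ca.pem/ca-key.pem:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// Toy CA (minikube loads this from certs/ca.pem and ca-key.pem).
    	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	ca := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().Add(26280 * time.Hour), // matches CertExpiration above
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	// Server leaf with the SAN set seen in the log.
    	leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	leaf := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{Organization: []string{"jenkins.ha-481559"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour),
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
    		DNSNames:     []string{"ha-481559", "localhost", "minikube"},
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, leaf, ca, &leafKey.PublicKey, caKey)
    	if err != nil {
    		panic(err)
    	}
    	_ = os.WriteFile("server.pem",
    		pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0644)
    }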
	I1006 14:44:41.073074  682995 ubuntu.go:206] setting minikube options for container-runtime
	I1006 14:44:41.073312  682995 config.go:182] Loaded profile config "ha-481559": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 14:44:41.073456  682995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:44:41.092548  682995 main.go:141] libmachine: Using SSH client type: native
	I1006 14:44:41.092838  682995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32883 <nil> <nil>}
	I1006 14:44:41.092869  682995 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1006 14:44:41.356221  682995 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1006 14:44:41.356247  682995 machine.go:96] duration metric: took 1.059160507s to provisionDockerMachine
	I1006 14:44:41.356259  682995 client.go:171] duration metric: took 6.924524382s to LocalClient.Create
	I1006 14:44:41.356282  682995 start.go:167] duration metric: took 6.924591304s to libmachine.API.Create "ha-481559"
	I1006 14:44:41.356295  682995 start.go:293] postStartSetup for "ha-481559" (driver="docker")
	I1006 14:44:41.356322  682995 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1006 14:44:41.356396  682995 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1006 14:44:41.356453  682995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:44:41.374424  682995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32883 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa Username:docker}
	I1006 14:44:41.479545  682995 ssh_runner.go:195] Run: cat /etc/os-release
	I1006 14:44:41.483318  682995 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1006 14:44:41.483345  682995 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1006 14:44:41.483356  682995 filesync.go:126] Scanning /home/jenkins/minikube-integration/21701-626179/.minikube/addons for local assets ...
	I1006 14:44:41.483402  682995 filesync.go:126] Scanning /home/jenkins/minikube-integration/21701-626179/.minikube/files for local assets ...
	I1006 14:44:41.483499  682995 filesync.go:149] local asset: /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem -> 6297192.pem in /etc/ssl/certs
	I1006 14:44:41.483510  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem -> /etc/ssl/certs/6297192.pem
	I1006 14:44:41.483603  682995 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1006 14:44:41.491409  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem --> /etc/ssl/certs/6297192.pem (1708 bytes)
	I1006 14:44:41.511609  682995 start.go:296] duration metric: took 155.29938ms for postStartSetup
	I1006 14:44:41.511914  682995 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-481559
	I1006 14:44:41.529867  682995 profile.go:143] Saving config to /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/config.json ...
	I1006 14:44:41.530158  682995 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1006 14:44:41.530223  682995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:44:41.547995  682995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32883 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa Username:docker}
	I1006 14:44:41.647810  682995 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1006 14:44:41.652637  682995 start.go:128] duration metric: took 7.223117194s to createHost
	I1006 14:44:41.652662  682995 start.go:83] releasing machines lock for "ha-481559", held for 7.223254897s
	I1006 14:44:41.652730  682995 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-481559
	I1006 14:44:41.670486  682995 ssh_runner.go:195] Run: cat /version.json
	I1006 14:44:41.670511  682995 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1006 14:44:41.670555  682995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:44:41.670581  682995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:44:41.689278  682995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32883 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa Username:docker}
	I1006 14:44:41.689801  682995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32883 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa Username:docker}
	I1006 14:44:41.845142  682995 ssh_runner.go:195] Run: systemctl --version
	I1006 14:44:41.852333  682995 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1006 14:44:41.886799  682995 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1006 14:44:41.891575  682995 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1006 14:44:41.891645  682995 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1006 14:44:41.918020  682995 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1006 14:44:41.918049  682995 start.go:495] detecting cgroup driver to use...
	I1006 14:44:41.918088  682995 detect.go:190] detected "systemd" cgroup driver on host os
	I1006 14:44:41.918148  682995 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1006 14:44:41.934827  682995 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1006 14:44:41.946573  682995 docker.go:218] disabling cri-docker service (if available) ...
	I1006 14:44:41.946626  682995 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1006 14:44:41.961811  682995 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1006 14:44:41.978333  682995 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1006 14:44:42.056893  682995 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1006 14:44:42.140645  682995 docker.go:234] disabling docker service ...
	I1006 14:44:42.140713  682995 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1006 14:44:42.159372  682995 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1006 14:44:42.171857  682995 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1006 14:44:42.255908  682995 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1006 14:44:42.340081  682995 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1006 14:44:42.352916  682995 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1006 14:44:42.367142  682995 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1006 14:44:42.367215  682995 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:44:42.377866  682995 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1006 14:44:42.377939  682995 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:44:42.387157  682995 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:44:42.395944  682995 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:44:42.404768  682995 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1006 14:44:42.412712  682995 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:44:42.420910  682995 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:44:42.434108  682995 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:44:42.442895  682995 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1006 14:44:42.450289  682995 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1006 14:44:42.457667  682995 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 14:44:42.535385  682995 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1006 14:44:42.643348  682995 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1006 14:44:42.643424  682995 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1006 14:44:42.647404  682995 start.go:563] Will wait 60s for crictl version
	I1006 14:44:42.647467  682995 ssh_runner.go:195] Run: which crictl
	I1006 14:44:42.651000  682995 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1006 14:44:42.675962  682995 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1006 14:44:42.676044  682995 ssh_runner.go:195] Run: crio --version
	I1006 14:44:42.705541  682995 ssh_runner.go:195] Run: crio --version
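Between the docker/cri-docker shutdown and this version check, CRI-O was reconfigured through a row of in-place sed edits to /etc/crio/crio.conf.d/02-crio.conf: pin pause_image to registry.k8s.io/pause:3.10.1, set cgroup_manager to "systemd" to match the driver detected on the host, pin conmon_cgroup to "pod", and add net.ipv4.ip_unprivileged_port_start=0 to default_sysctls, followed by daemon-reload and a crio restart. The same edits can be expressed without sed; a rough Go sketch covering the first two (file path as in the log, patterns simplified):

    package main

    import (
    	"os"
    	"regexp"
    )

    func main() {
    	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
    	data, err := os.ReadFile(conf)
    	if err != nil {
    		panic(err)
    	}
    	// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "..."|'
    	data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
    		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
    	// Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|'
    	data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
    		ReplaceAll(data, []byte(`cgroup_manager = "systemd"`))
    	if err := os.WriteFile(conf, data, 0644); err != nil {
    		panic(err)
    	}
    }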
	I1006 14:44:42.736773  682995 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1006 14:44:42.738090  682995 cli_runner.go:164] Run: docker network inspect ha-481559 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1006 14:44:42.754892  682995 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1006 14:44:42.759274  682995 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1006 14:44:42.770415  682995 kubeadm.go:883] updating cluster {Name:ha-481559 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-481559 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1006 14:44:42.770534  682995 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1006 14:44:42.770581  682995 ssh_runner.go:195] Run: sudo crictl images --output json
	I1006 14:44:42.805187  682995 crio.go:514] all images are preloaded for cri-o runtime.
	I1006 14:44:42.805221  682995 crio.go:433] Images already preloaded, skipping extraction
	I1006 14:44:42.805274  682995 ssh_runner.go:195] Run: sudo crictl images --output json
	I1006 14:44:42.831096  682995 crio.go:514] all images are preloaded for cri-o runtime.
	I1006 14:44:42.831123  682995 cache_images.go:85] Images are preloaded, skipping loading
	I1006 14:44:42.831132  682995 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1006 14:44:42.831244  682995 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-481559 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-481559 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1006 14:44:42.831321  682995 ssh_runner.go:195] Run: crio config
	I1006 14:44:42.877768  682995 cni.go:84] Creating CNI manager for ""
	I1006 14:44:42.877790  682995 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1006 14:44:42.877819  682995 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1006 14:44:42.877840  682995 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-481559 NodeName:ha-481559 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1006 14:44:42.877966  682995 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-481559"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
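
This is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration); it is written to /var/tmp/minikube/kubeadm.yaml.new (the 2205-byte scp a few lines below) and promoted to kubeadm.yaml shortly before cluster bring-up, presumably for kubeadm to consume later in the run. One quick way to sanity-check such a stream is to decode every document and print its kind; a sketch using gopkg.in/yaml.v3:

    package main

    import (
    	"fmt"
    	"io"
    	"os"

    	"gopkg.in/yaml.v3"
    )

    func main() {
    	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
    	if err != nil {
    		panic(err)
    	}
    	defer f.Close()
    	dec := yaml.NewDecoder(f) // iterates over "---"-separated documents
    	for {
    		var doc struct {
    			APIVersion string `yaml:"apiVersion"`
    			Kind       string `yaml:"kind"`
    		}
    		if err := dec.Decode(&doc); err == io.EOF {
    			break
    		} else if err != nil {
    			panic(err)
    		}
    		fmt.Println(doc.Kind, doc.APIVersion)
    	}
    }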
	
	I1006 14:44:42.877993  682995 kube-vip.go:115] generating kube-vip config ...
	I1006 14:44:42.878035  682995 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1006 14:44:42.890886  682995 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appear not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1006 14:44:42.890995  682995 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
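The fallback at 14:44:42.890886 matters here: because `lsmod | grep ip_vs` exits non-zero inside the kic container, IPVS-based control-plane load-balancing is skipped and the generated static pod relies on ARP leader election instead (vip_arp and vip_leaderelection both "true"), announcing the VIP 192.168.49.254 on eth0 from whichever control-plane node holds the plndr-cp-lock lease. The module probe is easy to reproduce; a tiny Go sketch:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Mirrors: sudo sh -c "lsmod | grep ip_vs"
    	if err := exec.Command("sh", "-c", "lsmod | grep ip_vs").Run(); err != nil {
    		fmt.Println("ip_vs not loaded; kube-vip would fall back to ARP mode")
    		return
    	}
    	fmt.Println("ip_vs available; IPVS load-balancing possible")
    }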
	I1006 14:44:42.891046  682995 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1006 14:44:42.899063  682995 binaries.go:44] Found k8s binaries, skipping transfer
	I1006 14:44:42.899132  682995 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1006 14:44:42.906926  682995 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1006 14:44:42.919358  682995 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1006 14:44:42.934141  682995 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1006 14:44:42.945961  682995 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I1006 14:44:42.959489  682995 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1006 14:44:42.962953  682995 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1006 14:44:42.972760  682995 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 14:44:43.053996  682995 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1006 14:44:43.077665  682995 certs.go:69] Setting up /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559 for IP: 192.168.49.2
	I1006 14:44:43.077692  682995 certs.go:195] generating shared ca certs ...
	I1006 14:44:43.077714  682995 certs.go:227] acquiring lock for ca certs: {Name:mka0cc25cb6a953e937aa825fc55167759271aaf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:44:43.077856  682995 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21701-626179/.minikube/ca.key
	I1006 14:44:43.077899  682995 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21701-626179/.minikube/proxy-client-ca.key
	I1006 14:44:43.077909  682995 certs.go:257] generating profile certs ...
	I1006 14:44:43.077963  682995 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/client.key
	I1006 14:44:43.077983  682995 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/client.crt with IP's: []
	I1006 14:44:43.259387  682995 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/client.crt ...
	I1006 14:44:43.259418  682995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/client.crt: {Name:mk058803c7a7f0f2aa3fb547a3aafbba9518c3f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:44:43.259607  682995 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/client.key ...
	I1006 14:44:43.259619  682995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/client.key: {Name:mk0ae3492597f7c1edf0d7262770452fa244a40b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:44:43.265151  682995 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.key.6031b710
	I1006 14:44:43.265175  682995 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.crt.6031b710 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I1006 14:44:43.807062  682995 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.crt.6031b710 ...
	I1006 14:44:43.807095  682995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.crt.6031b710: {Name:mk30dd14f07a4b732bb60853cc2fd5f84f73e2f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:44:43.807283  682995 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.key.6031b710 ...
	I1006 14:44:43.807298  682995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.key.6031b710: {Name:mkf3f5fbdf7957143c03cb611320a2e02acb94c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:44:43.807374  682995 certs.go:382] copying /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.crt.6031b710 -> /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.crt
	I1006 14:44:43.807489  682995 certs.go:386] copying /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.key.6031b710 -> /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.key
	I1006 14:44:43.807558  682995 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.key
	I1006 14:44:43.807574  682995 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.crt with IP's: []
	I1006 14:44:43.994115  682995 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.crt ...
	I1006 14:44:43.994149  682995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.crt: {Name:mk715c6902e25626016d7eb8fdb7b52f0fdce895 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:44:43.994338  682995 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.key ...
	I1006 14:44:43.994350  682995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.key: {Name:mka438ddf42b96ca34511dda1ce60f08f1d48b59 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:44:43.994429  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1006 14:44:43.994449  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1006 14:44:43.994460  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1006 14:44:43.994470  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1006 14:44:43.994480  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1006 14:44:43.994490  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1006 14:44:43.994510  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1006 14:44:43.994522  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1006 14:44:43.994570  682995 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/629719.pem (1338 bytes)
	W1006 14:44:43.994617  682995 certs.go:480] ignoring /home/jenkins/minikube-integration/21701-626179/.minikube/certs/629719_empty.pem, impossibly tiny 0 bytes
	I1006 14:44:43.994630  682995 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca-key.pem (1675 bytes)
	I1006 14:44:43.994653  682995 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem (1082 bytes)
	I1006 14:44:43.994674  682995 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/cert.pem (1123 bytes)
	I1006 14:44:43.994701  682995 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/key.pem (1679 bytes)
	I1006 14:44:43.994739  682995 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem (1708 bytes)
	I1006 14:44:43.994772  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem -> /usr/share/ca-certificates/6297192.pem
	I1006 14:44:43.994786  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1006 14:44:43.994798  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/629719.pem -> /usr/share/ca-certificates/629719.pem
	I1006 14:44:43.995423  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1006 14:44:44.014422  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1006 14:44:44.032422  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1006 14:44:44.050727  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1006 14:44:44.068490  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1006 14:44:44.085540  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1006 14:44:44.102941  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1006 14:44:44.121043  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1006 14:44:44.139583  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem --> /usr/share/ca-certificates/6297192.pem (1708 bytes)
	I1006 14:44:44.159654  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1006 14:44:44.176939  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/certs/629719.pem --> /usr/share/ca-certificates/629719.pem (1338 bytes)
	I1006 14:44:44.194332  682995 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
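Worth noting in the apiserver SAN list above ([10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]): 10.96.0.1 is the first usable address of the 10.96.0.0/12 ServiceCIDR, i.e. the in-cluster ClusterIP of the `kubernetes` Service, alongside loopback, the node IP and the HA VIP. Deriving that first service IP from the CIDR is a one-liner with net/netip:

    package main

    import (
    	"fmt"
    	"net/netip"
    )

    func main() {
    	prefix := netip.MustParsePrefix("10.96.0.0/12") // ServiceCIDR from the config above
    	first := prefix.Addr().Next()                   // network address + 1
    	fmt.Println(first)                              // 10.96.0.1
    }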
	I1006 14:44:44.207641  682995 ssh_runner.go:195] Run: openssl version
	I1006 14:44:44.214349  682995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6297192.pem && ln -fs /usr/share/ca-certificates/6297192.pem /etc/ssl/certs/6297192.pem"
	I1006 14:44:44.223426  682995 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6297192.pem
	I1006 14:44:44.227339  682995 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  6 14:13 /usr/share/ca-certificates/6297192.pem
	I1006 14:44:44.227401  682995 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6297192.pem
	I1006 14:44:44.261578  682995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6297192.pem /etc/ssl/certs/3ec20f2e.0"
	I1006 14:44:44.270472  682995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1006 14:44:44.279083  682995 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1006 14:44:44.282749  682995 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  6 13:56 /usr/share/ca-certificates/minikubeCA.pem
	I1006 14:44:44.282813  682995 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1006 14:44:44.316484  682995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1006 14:44:44.325228  682995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/629719.pem && ln -fs /usr/share/ca-certificates/629719.pem /etc/ssl/certs/629719.pem"
	I1006 14:44:44.334098  682995 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/629719.pem
	I1006 14:44:44.337988  682995 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  6 14:13 /usr/share/ca-certificates/629719.pem
	I1006 14:44:44.338051  682995 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/629719.pem
	I1006 14:44:44.371914  682995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/629719.pem /etc/ssl/certs/51391683.0"
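The three openssl runs compute each PEM's subject-name hash, which is the key OpenSSL uses when scanning a CA directory, so the symlinks /etc/ssl/certs/3ec20f2e.0, b5213941.0 and 51391683.0 make the copied certificates discoverable to any TLS client trusting that directory: a manual c_rehash, in effect. A sketch of the hash-then-link step in Go, shelling out to openssl:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    // linkBySubjectHash replicates: openssl x509 -hash -noout -in <pem>
    // followed by: ln -fs <pem> /etc/ssl/certs/<hash>.0
    func linkBySubjectHash(pemPath string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    	if err != nil {
    		return err
    	}
    	hash := strings.TrimSpace(string(out))
    	link := "/etc/ssl/certs/" + hash + ".0"
    	_ = os.Remove(link) // -f semantics
    	return os.Symlink(pemPath, link)
    }

    func main() {
    	fmt.Println(linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem"))
    }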
	I1006 14:44:44.380847  682995 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1006 14:44:44.384643  682995 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1006 14:44:44.384694  682995 kubeadm.go:400] StartCluster: {Name:ha-481559 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-481559 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServe
rNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: Socke
tVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
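
For orientation, the StartCluster config above corresponds roughly to a start invocation like the one below. This is a reconstruction from the config fields (Driver, ContainerRuntime, Memory, CPUs, KubernetesVersion, multi-control-plane), not the literal test command:

    out/minikube-linux-amd64 start -p ha-481559 --driver=docker --container-runtime=crio \
      --memory=3072 --cpus=2 --kubernetes-version=v1.34.1 --ha
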
	I1006 14:44:44.384758  682995 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1006 14:44:44.384823  682995 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1006 14:44:44.413083  682995 cri.go:89] found id: ""
	I1006 14:44:44.413145  682995 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1006 14:44:44.421446  682995 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1006 14:44:44.429380  682995 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1006 14:44:44.429431  682995 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1006 14:44:44.437643  682995 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1006 14:44:44.437667  682995 kubeadm.go:157] found existing configuration files:
	
	I1006 14:44:44.437726  682995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1006 14:44:44.445948  682995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1006 14:44:44.446021  682995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1006 14:44:44.453451  682995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1006 14:44:44.460986  682995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1006 14:44:44.461064  682995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1006 14:44:44.468259  682995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1006 14:44:44.475830  682995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1006 14:44:44.475882  682995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1006 14:44:44.483080  682995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1006 14:44:44.490569  682995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1006 14:44:44.490632  682995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1006 14:44:44.498056  682995 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1006 14:44:44.560210  682995 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1006 14:44:44.618315  682995 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1006 14:48:49.762009  682995 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1006 14:48:49.762136  682995 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
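
The three endpoints named in the error above are the ones worth probing by hand when this reproduces. A minimal sketch, run from the host against the node (profile name and URLs taken from the log):

    minikube -p ha-481559 ssh -- curl -sk https://192.168.49.2:8443/livez      # kube-apiserver
    minikube -p ha-481559 ssh -- curl -sk https://127.0.0.1:10259/livez        # kube-scheduler
    minikube -p ha-481559 ssh -- curl -sk https://127.0.0.1:10257/healthz      # kube-controller-manager
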
	I1006 14:48:49.765019  682995 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1006 14:48:49.765065  682995 kubeadm.go:318] [preflight] Running pre-flight checks
	I1006 14:48:49.765142  682995 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1006 14:48:49.765192  682995 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1006 14:48:49.765263  682995 kubeadm.go:318] OS: Linux
	I1006 14:48:49.765329  682995 kubeadm.go:318] CGROUPS_CPU: enabled
	I1006 14:48:49.765384  682995 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1006 14:48:49.765424  682995 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1006 14:48:49.765465  682995 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1006 14:48:49.765507  682995 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1006 14:48:49.765557  682995 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1006 14:48:49.765644  682995 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1006 14:48:49.765713  682995 kubeadm.go:318] CGROUPS_IO: enabled
	I1006 14:48:49.765816  682995 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1006 14:48:49.765897  682995 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1006 14:48:49.765974  682995 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1006 14:48:49.766033  682995 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1006 14:48:49.768189  682995 out.go:252]   - Generating certificates and keys ...
	I1006 14:48:49.768304  682995 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1006 14:48:49.768391  682995 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1006 14:48:49.768495  682995 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1006 14:48:49.768546  682995 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1006 14:48:49.768600  682995 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1006 14:48:49.768641  682995 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1006 14:48:49.768684  682995 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1006 14:48:49.768778  682995 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [ha-481559 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1006 14:48:49.768847  682995 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1006 14:48:49.768982  682995 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [ha-481559 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1006 14:48:49.769042  682995 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1006 14:48:49.769108  682995 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1006 14:48:49.769166  682995 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1006 14:48:49.769263  682995 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1006 14:48:49.769339  682995 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1006 14:48:49.769416  682995 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1006 14:48:49.769489  682995 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1006 14:48:49.769549  682995 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1006 14:48:49.769601  682995 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1006 14:48:49.769671  682995 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1006 14:48:49.769753  682995 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1006 14:48:49.771489  682995 out.go:252]   - Booting up control plane ...
	I1006 14:48:49.771577  682995 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1006 14:48:49.771664  682995 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1006 14:48:49.771742  682995 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1006 14:48:49.771858  682995 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1006 14:48:49.771974  682995 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1006 14:48:49.772108  682995 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1006 14:48:49.772220  682995 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1006 14:48:49.772288  682995 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1006 14:48:49.772413  682995 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1006 14:48:49.772556  682995 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1006 14:48:49.772647  682995 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.501252368s
	I1006 14:48:49.772772  682995 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1006 14:48:49.772891  682995 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1006 14:48:49.772971  682995 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1006 14:48:49.773033  682995 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1006 14:48:49.773108  682995 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.001319326s
	I1006 14:48:49.773189  682995 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001358761s
	I1006 14:48:49.773304  682995 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.001281021s
	I1006 14:48:49.773319  682995 kubeadm.go:318] 
	I1006 14:48:49.773407  682995 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1006 14:48:49.773472  682995 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1006 14:48:49.773545  682995 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1006 14:48:49.773657  682995 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1006 14:48:49.773771  682995 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1006 14:48:49.773850  682995 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1006 14:48:49.773891  682995 kubeadm.go:318] 
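
The crictl advice above can be collapsed into a one-shot triage snippet, run inside the node. A sketch, assuming crictl is on the PATH (the kicbase image ships it):

    SOCK=unix:///var/run/crio/crio.sock
    # list all kube-* containers, including exited ones
    sudo crictl --runtime-endpoint "$SOCK" ps -a | grep kube | grep -v pause
    # dump logs of the most recent kube-apiserver attempt, if any exists
    CID=$(sudo crictl --runtime-endpoint "$SOCK" ps -a -q --name kube-apiserver | head -n1)
    [ -n "$CID" ] && sudo crictl --runtime-endpoint "$SOCK" logs "$CID"
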
	W1006 14:48:49.774048  682995 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ha-481559 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ha-481559 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.501252368s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.001319326s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001358761s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001281021s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	I1006 14:48:49.774147  682995 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1006 14:48:52.524900  682995 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.75072398s)
	I1006 14:48:52.524985  682995 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1006 14:48:52.538104  682995 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1006 14:48:52.538173  682995 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1006 14:48:52.546610  682995 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1006 14:48:52.546639  682995 kubeadm.go:157] found existing configuration files:
	
	I1006 14:48:52.546692  682995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1006 14:48:52.555271  682995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1006 14:48:52.555334  682995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1006 14:48:52.564502  682995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1006 14:48:52.572861  682995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1006 14:48:52.572925  682995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1006 14:48:52.580681  682995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1006 14:48:52.588574  682995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1006 14:48:52.588636  682995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1006 14:48:52.596314  682995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1006 14:48:52.604007  682995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1006 14:48:52.604073  682995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1006 14:48:52.611967  682995 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1006 14:48:52.650794  682995 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1006 14:48:52.650844  682995 kubeadm.go:318] [preflight] Running pre-flight checks
	I1006 14:48:52.671446  682995 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1006 14:48:52.671559  682995 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1006 14:48:52.671628  682995 kubeadm.go:318] OS: Linux
	I1006 14:48:52.671718  682995 kubeadm.go:318] CGROUPS_CPU: enabled
	I1006 14:48:52.671766  682995 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1006 14:48:52.671811  682995 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1006 14:48:52.671850  682995 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1006 14:48:52.671890  682995 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1006 14:48:52.671928  682995 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1006 14:48:52.671972  682995 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1006 14:48:52.672010  682995 kubeadm.go:318] CGROUPS_IO: enabled
	I1006 14:48:52.732758  682995 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1006 14:48:52.732876  682995 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1006 14:48:52.732979  682995 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1006 14:48:52.739914  682995 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1006 14:48:52.743428  682995 out.go:252]   - Generating certificates and keys ...
	I1006 14:48:52.743535  682995 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1006 14:48:52.743654  682995 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1006 14:48:52.743727  682995 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1006 14:48:52.743777  682995 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1006 14:48:52.743861  682995 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1006 14:48:52.743911  682995 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1006 14:48:52.743985  682995 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1006 14:48:52.744055  682995 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1006 14:48:52.744143  682995 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1006 14:48:52.744228  682995 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1006 14:48:52.744266  682995 kubeadm.go:318] [certs] Using the existing "sa" key
	I1006 14:48:52.744323  682995 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1006 14:48:53.107297  682995 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1006 14:48:53.300701  682995 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1006 14:48:53.503166  682995 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1006 14:48:53.664024  682995 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1006 14:48:53.725865  682995 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1006 14:48:53.726293  682995 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1006 14:48:53.728797  682995 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1006 14:48:53.730586  682995 out.go:252]   - Booting up control plane ...
	I1006 14:48:53.730720  682995 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1006 14:48:53.730830  682995 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1006 14:48:53.730903  682995 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1006 14:48:53.744534  682995 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1006 14:48:53.744672  682995 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1006 14:48:53.752267  682995 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1006 14:48:53.752422  682995 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1006 14:48:53.752505  682995 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1006 14:48:53.852049  682995 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1006 14:48:53.852226  682995 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1006 14:48:54.353729  682995 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 501.825241ms
	I1006 14:48:54.356542  682995 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1006 14:48:54.356619  682995 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1006 14:48:54.356695  682995 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1006 14:48:54.356819  682995 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1006 14:52:54.358331  682995 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.001082251s
	I1006 14:52:54.358653  682995 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001136686s
	I1006 14:52:54.358853  682995 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.001070627s
	I1006 14:52:54.358881  682995 kubeadm.go:318] 
	I1006 14:52:54.359059  682995 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1006 14:52:54.359298  682995 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1006 14:52:54.359539  682995 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1006 14:52:54.359760  682995 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1006 14:52:54.359952  682995 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1006 14:52:54.360116  682995 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1006 14:52:54.360148  682995 kubeadm.go:318] 
	I1006 14:52:54.363033  682995 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1006 14:52:54.363163  682995 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1006 14:52:54.363696  682995 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1006 14:52:54.363761  682995 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1006 14:52:54.363858  682995 kubeadm.go:402] duration metric: took 8m9.979166519s to StartCluster
	I1006 14:52:54.363946  682995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:52:54.364031  682995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:52:54.392579  682995 cri.go:89] found id: ""
	I1006 14:52:54.392622  682995 logs.go:282] 0 containers: []
	W1006 14:52:54.392631  682995 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:52:54.392638  682995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:52:54.392693  682995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:52:54.420188  682995 cri.go:89] found id: ""
	I1006 14:52:54.420226  682995 logs.go:282] 0 containers: []
	W1006 14:52:54.420237  682995 logs.go:284] No container was found matching "etcd"
	I1006 14:52:54.420245  682995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:52:54.420299  682995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:52:54.445694  682995 cri.go:89] found id: ""
	I1006 14:52:54.445723  682995 logs.go:282] 0 containers: []
	W1006 14:52:54.445733  682995 logs.go:284] No container was found matching "coredns"
	I1006 14:52:54.445740  682995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:52:54.445791  682995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:52:54.471923  682995 cri.go:89] found id: ""
	I1006 14:52:54.471954  682995 logs.go:282] 0 containers: []
	W1006 14:52:54.471962  682995 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:52:54.471971  682995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:52:54.472030  682995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:52:54.498805  682995 cri.go:89] found id: ""
	I1006 14:52:54.498836  682995 logs.go:282] 0 containers: []
	W1006 14:52:54.498848  682995 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:52:54.498857  682995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:52:54.498922  682995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:52:54.524613  682995 cri.go:89] found id: ""
	I1006 14:52:54.524638  682995 logs.go:282] 0 containers: []
	W1006 14:52:54.524646  682995 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:52:54.524652  682995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:52:54.524708  682995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:52:54.551140  682995 cri.go:89] found id: ""
	I1006 14:52:54.551170  682995 logs.go:282] 0 containers: []
	W1006 14:52:54.551181  682995 logs.go:284] No container was found matching "kindnet"
	I1006 14:52:54.551194  682995 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:52:54.551220  682995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:52:54.615573  682995 logs.go:123] Gathering logs for container status ...
	I1006 14:52:54.615607  682995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:52:54.645703  682995 logs.go:123] Gathering logs for kubelet ...
	I1006 14:52:54.645732  682995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:52:54.709506  682995 logs.go:123] Gathering logs for dmesg ...
	I1006 14:52:54.709543  682995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:52:54.722963  682995 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:52:54.722997  682995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:52:54.783016  682995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:52:54.774940    2611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 14:52:54.776283    2611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 14:52:54.777585    2611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 14:52:54.778053    2611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 14:52:54.779590    2611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
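
The repeated "connection refused" on localhost:8443 can be cross-checked from the host: if the command below prints nothing, no process is bound to the apiserver port inside the node (a sketch; assumes ss is present in the node image):

    minikube -p ha-481559 ssh -- sudo ss -ltnp | grep 8443
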
	W1006 14:52:54.783054  682995 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.825241ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.001082251s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001136686s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001070627s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1006 14:52:54.783107  682995 out.go:285] * 
	W1006 14:52:54.783182  682995 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	W1006 14:52:54.783200  682995 out.go:285] * 
	W1006 14:52:54.785658  682995 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1006 14:52:54.789273  682995 out.go:203] 
	W1006 14:52:54.790573  682995 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	W1006 14:52:54.790604  682995 out.go:285] * 
	I1006 14:52:54.791821  682995 out.go:203] 
	
	
	==> CRI-O <==
	Oct 06 14:54:28 ha-481559 crio[777]: time="2025-10-06T14:54:28.246426068Z" level=info msg="createCtr: removing container c1376676dafaf7b4d10a72a589a3ae2d56ecf790744e031ae536ebf8175e4485" id=ecbc1c2a-3ed9-4452-81c8-a6b6b312f34f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:54:28 ha-481559 crio[777]: time="2025-10-06T14:54:28.246465998Z" level=info msg="createCtr: deleting container c1376676dafaf7b4d10a72a589a3ae2d56ecf790744e031ae536ebf8175e4485 from storage" id=ecbc1c2a-3ed9-4452-81c8-a6b6b312f34f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:54:28 ha-481559 crio[777]: time="2025-10-06T14:54:28.249858456Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-ha-481559_kube-system_520c6060936b1c2aac479c99ed6c0355_0" id=ecbc1c2a-3ed9-4452-81c8-a6b6b312f34f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:54:30 ha-481559 crio[777]: time="2025-10-06T14:54:30.222474023Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=e2afb2cc-8b95-45ef-839d-d0dd5c34800d name=/runtime.v1.ImageService/ImageStatus
	Oct 06 14:54:30 ha-481559 crio[777]: time="2025-10-06T14:54:30.22368953Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=401de123-749a-4ebc-8ab1-078ad9c73c34 name=/runtime.v1.ImageService/ImageStatus
	Oct 06 14:54:30 ha-481559 crio[777]: time="2025-10-06T14:54:30.224710592Z" level=info msg="Creating container: kube-system/kube-scheduler-ha-481559/kube-scheduler" id=2af7715c-4231-40ed-a841-9fbd70a525e3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:54:30 ha-481559 crio[777]: time="2025-10-06T14:54:30.225088554Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 14:54:30 ha-481559 crio[777]: time="2025-10-06T14:54:30.22924992Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 14:54:30 ha-481559 crio[777]: time="2025-10-06T14:54:30.229878709Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 14:54:30 ha-481559 crio[777]: time="2025-10-06T14:54:30.249923087Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=2af7715c-4231-40ed-a841-9fbd70a525e3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:54:30 ha-481559 crio[777]: time="2025-10-06T14:54:30.251834213Z" level=info msg="createCtr: deleting container ID 4ccf5071d4a15329b25d201d70f0042454b12c8c9f251bd3ce8f5e7daa11b368 from idIndex" id=2af7715c-4231-40ed-a841-9fbd70a525e3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:54:30 ha-481559 crio[777]: time="2025-10-06T14:54:30.251887095Z" level=info msg="createCtr: removing container 4ccf5071d4a15329b25d201d70f0042454b12c8c9f251bd3ce8f5e7daa11b368" id=2af7715c-4231-40ed-a841-9fbd70a525e3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:54:30 ha-481559 crio[777]: time="2025-10-06T14:54:30.251932195Z" level=info msg="createCtr: deleting container 4ccf5071d4a15329b25d201d70f0042454b12c8c9f251bd3ce8f5e7daa11b368 from storage" id=2af7715c-4231-40ed-a841-9fbd70a525e3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:54:30 ha-481559 crio[777]: time="2025-10-06T14:54:30.2573433Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-ha-481559_kube-system_cc93cb8d89afaa943672c70952b45174_0" id=2af7715c-4231-40ed-a841-9fbd70a525e3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:54:32 ha-481559 crio[777]: time="2025-10-06T14:54:32.222451545Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=d2a61b85-604a-4a78-b4a0-a6ac7419591f name=/runtime.v1.ImageService/ImageStatus
	Oct 06 14:54:32 ha-481559 crio[777]: time="2025-10-06T14:54:32.223732465Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=756cc7eb-c750-461e-be90-ed96d3fbe167 name=/runtime.v1.ImageService/ImageStatus
	Oct 06 14:54:32 ha-481559 crio[777]: time="2025-10-06T14:54:32.22488018Z" level=info msg="Creating container: kube-system/kube-controller-manager-ha-481559/kube-controller-manager" id=15a4b8e4-4639-4fe3-b26e-d24edb5aaac3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:54:32 ha-481559 crio[777]: time="2025-10-06T14:54:32.225141708Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 14:54:32 ha-481559 crio[777]: time="2025-10-06T14:54:32.228812582Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 14:54:32 ha-481559 crio[777]: time="2025-10-06T14:54:32.229373513Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 14:54:32 ha-481559 crio[777]: time="2025-10-06T14:54:32.246307916Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=15a4b8e4-4639-4fe3-b26e-d24edb5aaac3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:54:32 ha-481559 crio[777]: time="2025-10-06T14:54:32.247725035Z" level=info msg="createCtr: deleting container ID 5cef11bd3bd8e3ab02e1ecc608a3fdc92d76230ae854ce7d96ffba97b455d556 from idIndex" id=15a4b8e4-4639-4fe3-b26e-d24edb5aaac3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:54:32 ha-481559 crio[777]: time="2025-10-06T14:54:32.247768884Z" level=info msg="createCtr: removing container 5cef11bd3bd8e3ab02e1ecc608a3fdc92d76230ae854ce7d96ffba97b455d556" id=15a4b8e4-4639-4fe3-b26e-d24edb5aaac3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:54:32 ha-481559 crio[777]: time="2025-10-06T14:54:32.247812016Z" level=info msg="createCtr: deleting container 5cef11bd3bd8e3ab02e1ecc608a3fdc92d76230ae854ce7d96ffba97b455d556 from storage" id=15a4b8e4-4639-4fe3-b26e-d24edb5aaac3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:54:32 ha-481559 crio[777]: time="2025-10-06T14:54:32.249966611Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-ha-481559_kube-system_5f3181798721fe8691d871f051785efc_0" id=15a4b8e4-4639-4fe3-b26e-d24edb5aaac3 name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:54:33.013399    3569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 14:54:33.013929    3569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 14:54:33.015522    3569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 14:54:33.016013    3569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 14:54:33.017608    3569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	
	
	==> kernel <==
	 14:54:33 up  5:36,  0 user,  load average: 0.34, 0.11, 0.16
	Linux ha-481559 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 06 14:54:28 ha-481559 kubelet[1985]:  > podSandboxID="a7ce34bebe17bc556bee492a72e0243ebe86fdfcd40a6e28aafa4e286d225bc6"
	Oct 06 14:54:28 ha-481559 kubelet[1985]: E1006 14:54:28.250298    1985 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 06 14:54:28 ha-481559 kubelet[1985]:         container etcd start failed in pod etcd-ha-481559_kube-system(520c6060936b1c2aac479c99ed6c0355): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 06 14:54:28 ha-481559 kubelet[1985]:  > logger="UnhandledError"
	Oct 06 14:54:28 ha-481559 kubelet[1985]: E1006 14:54:28.250344    1985 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-ha-481559" podUID="520c6060936b1c2aac479c99ed6c0355"
	Oct 06 14:54:28 ha-481559 kubelet[1985]: E1006 14:54:28.861462    1985 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-481559?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 06 14:54:29 ha-481559 kubelet[1985]: E1006 14:54:29.038379    1985 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8443: connect: connection refused" event="&Event{ObjectMeta:{ha-481559.186bee56630f6256  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-481559,UID:ha-481559,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ha-481559 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ha-481559,},FirstTimestamp:2025-10-06 14:48:54.214861398 +0000 UTC m=+0.361990569,LastTimestamp:2025-10-06 14:48:54.214861398 +0000 UTC m=+0.361990569,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-481559,}"
	Oct 06 14:54:29 ha-481559 kubelet[1985]: I1006 14:54:29.038950    1985 kubelet_node_status.go:75] "Attempting to register node" node="ha-481559"
	Oct 06 14:54:29 ha-481559 kubelet[1985]: E1006 14:54:29.039334    1985 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-481559"
	Oct 06 14:54:30 ha-481559 kubelet[1985]: E1006 14:54:30.221903    1985 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-481559\" not found" node="ha-481559"
	Oct 06 14:54:30 ha-481559 kubelet[1985]: E1006 14:54:30.257771    1985 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 06 14:54:30 ha-481559 kubelet[1985]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 06 14:54:30 ha-481559 kubelet[1985]:  > podSandboxID="28815a6c32deaa458111079bbac61f47b8e22f338f2282fab7d62077c8b07f1e"
	Oct 06 14:54:30 ha-481559 kubelet[1985]: E1006 14:54:30.257901    1985 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 06 14:54:30 ha-481559 kubelet[1985]:         container kube-scheduler start failed in pod kube-scheduler-ha-481559_kube-system(cc93cb8d89afaa943672c70952b45174): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 06 14:54:30 ha-481559 kubelet[1985]:  > logger="UnhandledError"
	Oct 06 14:54:30 ha-481559 kubelet[1985]: E1006 14:54:30.257947    1985 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-ha-481559" podUID="cc93cb8d89afaa943672c70952b45174"
	Oct 06 14:54:32 ha-481559 kubelet[1985]: E1006 14:54:32.221850    1985 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-481559\" not found" node="ha-481559"
	Oct 06 14:54:32 ha-481559 kubelet[1985]: E1006 14:54:32.250411    1985 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 06 14:54:32 ha-481559 kubelet[1985]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 06 14:54:32 ha-481559 kubelet[1985]:  > podSandboxID="ed93c32f27ea2f50c71693ae2d5854b0e5ace377e978db1e844e55a1b66c855a"
	Oct 06 14:54:32 ha-481559 kubelet[1985]: E1006 14:54:32.250537    1985 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 06 14:54:32 ha-481559 kubelet[1985]:         container kube-controller-manager start failed in pod kube-controller-manager-ha-481559_kube-system(5f3181798721fe8691d871f051785efc): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 06 14:54:32 ha-481559 kubelet[1985]:  > logger="UnhandledError"
	Oct 06 14:54:32 ha-481559 kubelet[1985]: E1006 14:54:32.250577    1985 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-ha-481559" podUID="5f3181798721fe8691d871f051785efc"
	

-- /stdout --
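The CRI-O and kubelet excerpts above show every control-plane container failing with the same CreateContainerError, "cannot open sd-bus: No such file or directory", which typically means the runtime is configured for the systemd cgroup manager while no systemd D-Bus socket is reachable inside the node. A minimal way to check both sides from the host (a sketch, assuming CRI-O's default config location and the standard D-Bus socket path inside the node):
	- 'minikube -p ha-481559 ssh -- sudo grep -rn cgroup_manager /etc/crio/'
	- 'minikube -p ha-481559 ssh -- sudo ls -l /run/dbus/system_bus_socket'
If cgroup_manager is "systemd" but the socket is absent, pointing CRI-O at "cgroupfs" (or making systemd's D-Bus available in the node) is the usual remedy.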
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-481559 -n ha-481559
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-481559 -n ha-481559: exit status 6 (304.134257ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1006 14:54:33.395497  691349 status.go:458] kubeconfig endpoint: get endpoint: "ha-481559" does not appear in /home/jenkins/minikube-integration/21701-626179/kubeconfig

** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "ha-481559" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/NodeLabels (1.33s)
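The status errors in this run stem from the profile's endpoint being missing from the test kubeconfig ("ha-481559" does not appear in the kubeconfig path shown in the stderr above). A quick way to verify and repair the context (a sketch, assuming that same kubeconfig path):
	- 'kubectl config get-contexts --kubeconfig /home/jenkins/minikube-integration/21701-626179/kubeconfig'
	- 'out/minikube-linux-amd64 -p ha-481559 update-context'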

TestMultiControlPlane/serial/HAppyAfterClusterStart (1.58s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:305: expected profile "ha-481559" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-481559\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-481559\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d\",\"Memory\":3072,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nf
sshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.34.1\",\"ClusterName\":\"ha-481559\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.49.2\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonIm
ages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"MountString\":\"\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"DisableCoreDNSLog\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
ha_test.go:309: expected profile "ha-481559" in json of 'profile list' to have "HAppy" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-481559\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-481559\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d\",\"Memory\":3072,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSShar
esRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.34.1\",\"ClusterName\":\"ha-481559\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.49.2\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\
"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"MountString\":\"\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"DisableCoreDNSLog\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-linux-amd64 profile list --
output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterClusterStart]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterClusterStart]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-481559
helpers_test.go:243: (dbg) docker inspect ha-481559:

-- stdout --
	[
	    {
	        "Id": "8b017d29b6b188c11217460aa328e959f3cceef4aaac68c0efbe9c3f356f27b0",
	        "Created": "2025-10-06T14:44:39.623616791Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 683567,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-06T14:44:39.660699919Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/8b017d29b6b188c11217460aa328e959f3cceef4aaac68c0efbe9c3f356f27b0/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8b017d29b6b188c11217460aa328e959f3cceef4aaac68c0efbe9c3f356f27b0/hostname",
	        "HostsPath": "/var/lib/docker/containers/8b017d29b6b188c11217460aa328e959f3cceef4aaac68c0efbe9c3f356f27b0/hosts",
	        "LogPath": "/var/lib/docker/containers/8b017d29b6b188c11217460aa328e959f3cceef4aaac68c0efbe9c3f356f27b0/8b017d29b6b188c11217460aa328e959f3cceef4aaac68c0efbe9c3f356f27b0-json.log",
	        "Name": "/ha-481559",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-481559:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-481559",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "8b017d29b6b188c11217460aa328e959f3cceef4aaac68c0efbe9c3f356f27b0",
	                "LowerDir": "/var/lib/docker/overlay2/764c4d13032cea1c981a1513641378a4e309e5bc01b5356c6fecba1c3e0e2311-init/diff:/var/lib/docker/overlay2/498c39ad2e273bbda04a4b230222b9767ea2da097b1fe98436168d26143cd080/diff",
	                "MergedDir": "/var/lib/docker/overlay2/764c4d13032cea1c981a1513641378a4e309e5bc01b5356c6fecba1c3e0e2311/merged",
	                "UpperDir": "/var/lib/docker/overlay2/764c4d13032cea1c981a1513641378a4e309e5bc01b5356c6fecba1c3e0e2311/diff",
	                "WorkDir": "/var/lib/docker/overlay2/764c4d13032cea1c981a1513641378a4e309e5bc01b5356c6fecba1c3e0e2311/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-481559",
	                "Source": "/var/lib/docker/volumes/ha-481559/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-481559",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-481559",
	                "name.minikube.sigs.k8s.io": "ha-481559",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7effae92997970d320561b0b86c210815b18a55d65bd555e1bff50158ed38adc",
	            "SandboxKey": "/var/run/docker/netns/7effae929979",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32883"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32884"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32887"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32885"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32886"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-481559": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "42:f3:45:3f:5b:fc",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "be549c6a1ae4457d4629d9a7f86cde88021333ee0af8bb7a740b008115c43dde",
	                    "EndpointID": "b8540561692606ad815fcdb4502c1e36a16141413d3697f4cf48668502930e4c",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-481559",
	                        "8b017d29b6b1"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
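When only a couple of fields from the inspect dump matter, a Go-template format string keeps the post-mortem shorter. A sketch using fields present in the dump above:
	docker inspect -f '{{.State.Status}} {{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' ha-481559
For this container that would print "running 192.168.49.2", matching State.Status and the ha-481559 network's IPAddress shown above.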
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-481559 -n ha-481559
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-481559 -n ha-481559: exit status 6 (287.017048ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1006 14:54:34.013839  691597 status.go:458] kubeconfig endpoint: get endpoint: "ha-481559" does not appear in /home/jenkins/minikube-integration/21701-626179/kubeconfig

** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
helpers_test.go:252: <<< TestMultiControlPlane/serial/HAppyAfterClusterStart FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterClusterStart]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-481559 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/HAppyAfterClusterStart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                      ARGS                                                       │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-135520 image ls --format yaml --alsologtostderr                                                      │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │ 06 Oct 25 14:40 UTC │
	│ ssh     │ functional-135520 ssh pgrep buildkitd                                                                           │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │                     │
	│ image   │ functional-135520 image build -t localhost/my-image:functional-135520 testdata/build --alsologtostderr          │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │ 06 Oct 25 14:40 UTC │
	│ image   │ functional-135520 image ls                                                                                      │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │ 06 Oct 25 14:40 UTC │
	│ delete  │ -p functional-135520                                                                                            │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:44 UTC │ 06 Oct 25 14:44 UTC │
	│ start   │ ha-481559 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:44 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml                                                │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:52 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- rollout status deployment/busybox                                                          │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:52 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:52 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:52 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:52 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:53 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:53 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:53 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:53 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:53 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:53 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:54 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:54 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:54 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- exec  -- nslookup kubernetes.io                                                            │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:54 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- exec  -- nslookup kubernetes.default                                                       │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:54 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local                                     │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:54 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:54 UTC │                     │
	│ node    │ ha-481559 node add --alsologtostderr -v 5                                                                       │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:54 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/06 14:44:34
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1006 14:44:34.230587  682995 out.go:360] Setting OutFile to fd 1 ...
	I1006 14:44:34.230719  682995 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 14:44:34.230728  682995 out.go:374] Setting ErrFile to fd 2...
	I1006 14:44:34.230733  682995 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 14:44:34.230969  682995 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-626179/.minikube/bin
	I1006 14:44:34.231523  682995 out.go:368] Setting JSON to false
	I1006 14:44:34.232538  682995 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":19610,"bootTime":1759742264,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1006 14:44:34.232651  682995 start.go:140] virtualization: kvm guest
	I1006 14:44:34.235278  682995 out.go:179] * [ha-481559] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1006 14:44:34.236668  682995 out.go:179]   - MINIKUBE_LOCATION=21701
	I1006 14:44:34.236708  682995 notify.go:220] Checking for updates...
	I1006 14:44:34.239256  682995 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1006 14:44:34.240475  682995 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21701-626179/kubeconfig
	I1006 14:44:34.242249  682995 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21701-626179/.minikube
	I1006 14:44:34.243577  682995 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1006 14:44:34.244737  682995 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1006 14:44:34.246267  682995 driver.go:421] Setting default libvirt URI to qemu:///system
	I1006 14:44:34.271626  682995 docker.go:123] docker version: linux-28.5.0:Docker Engine - Community
	I1006 14:44:34.271783  682995 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 14:44:34.334697  682995 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-06 14:44:34.323928193 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1006 14:44:34.334819  682995 docker.go:318] overlay module found
	I1006 14:44:34.336770  682995 out.go:179] * Using the docker driver based on user configuration
	I1006 14:44:34.338109  682995 start.go:304] selected driver: docker
	I1006 14:44:34.338130  682995 start.go:924] validating driver "docker" against <nil>
	I1006 14:44:34.338144  682995 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1006 14:44:34.338750  682995 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 14:44:34.398314  682995 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-06 14:44:34.387376197 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1006 14:44:34.398587  682995 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1006 14:44:34.399080  682995 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1006 14:44:34.401095  682995 out.go:179] * Using Docker driver with root privileges
	I1006 14:44:34.402283  682995 cni.go:84] Creating CNI manager for ""
	I1006 14:44:34.402367  682995 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1006 14:44:34.402383  682995 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1006 14:44:34.402476  682995 start.go:348] cluster config:
	{Name:ha-481559 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-481559 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CR
ISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPaus
eInterval:1m0s}
	I1006 14:44:34.403829  682995 out.go:179] * Starting "ha-481559" primary control-plane node in "ha-481559" cluster
	I1006 14:44:34.404899  682995 cache.go:123] Beginning downloading kic base image for docker with crio
	I1006 14:44:34.406166  682995 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1006 14:44:34.407227  682995 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1006 14:44:34.407272  682995 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21701-626179/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1006 14:44:34.407284  682995 cache.go:58] Caching tarball of preloaded images
	I1006 14:44:34.407376  682995 preload.go:233] Found /home/jenkins/minikube-integration/21701-626179/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1006 14:44:34.407382  682995 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1006 14:44:34.407387  682995 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1006 14:44:34.407757  682995 profile.go:143] Saving config to /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/config.json ...
	I1006 14:44:34.407793  682995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/config.json: {Name:mkefd90ec0b9eae63c82d60bab053cdf7b5d9b74 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:44:34.429193  682995 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1006 14:44:34.429233  682995 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1006 14:44:34.429254  682995 cache.go:232] Successfully downloaded all kic artifacts
	I1006 14:44:34.429296  682995 start.go:360] acquireMachinesLock for ha-481559: {Name:mk240cd185ab39e9e4d3fa7c476aea5736cb5b11 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1006 14:44:34.429397  682995 start.go:364] duration metric: took 84.055µs to acquireMachinesLock for "ha-481559"
	I1006 14:44:34.429421  682995 start.go:93] Provisioning new machine with config: &{Name:ha-481559 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-481559 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMn
etClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1006 14:44:34.429503  682995 start.go:125] createHost starting for "" (driver="docker")
	I1006 14:44:34.431456  682995 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1006 14:44:34.431692  682995 start.go:159] libmachine.API.Create for "ha-481559" (driver="docker")
	I1006 14:44:34.431725  682995 client.go:168] LocalClient.Create starting
	I1006 14:44:34.431791  682995 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem
	I1006 14:44:34.431825  682995 main.go:141] libmachine: Decoding PEM data...
	I1006 14:44:34.431843  682995 main.go:141] libmachine: Parsing certificate...
	I1006 14:44:34.431939  682995 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21701-626179/.minikube/certs/cert.pem
	I1006 14:44:34.431977  682995 main.go:141] libmachine: Decoding PEM data...
	I1006 14:44:34.431994  682995 main.go:141] libmachine: Parsing certificate...
	I1006 14:44:34.432416  682995 cli_runner.go:164] Run: docker network inspect ha-481559 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1006 14:44:34.449965  682995 cli_runner.go:211] docker network inspect ha-481559 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1006 14:44:34.450053  682995 network_create.go:284] running [docker network inspect ha-481559] to gather additional debugging logs...
	I1006 14:44:34.450071  682995 cli_runner.go:164] Run: docker network inspect ha-481559
	W1006 14:44:34.468682  682995 cli_runner.go:211] docker network inspect ha-481559 returned with exit code 1
	I1006 14:44:34.468713  682995 network_create.go:287] error running [docker network inspect ha-481559]: docker network inspect ha-481559: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-481559 not found
	I1006 14:44:34.468724  682995 network_create.go:289] output of [docker network inspect ha-481559]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-481559 not found
	
	** /stderr **
	I1006 14:44:34.468902  682995 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1006 14:44:34.488223  682995 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ca2540}
	I1006 14:44:34.488276  682995 network_create.go:124] attempt to create docker network ha-481559 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1006 14:44:34.488338  682995 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-481559 ha-481559
	I1006 14:44:34.548630  682995 network_create.go:108] docker network ha-481559 192.168.49.0/24 created
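The create call above reserves the subnet, gateway, and MTU that the earlier inspect probes could not find. Confirming the reservation needs nothing minikube-specific:

	# Show the subnet and gateway actually assigned to the new network.
	docker network inspect ha-481559 \
	  --format '{{range .IPAM.Config}}{{.Subnet}} gateway={{.Gateway}}{{end}}'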
	I1006 14:44:34.548669  682995 kic.go:121] calculated static IP "192.168.49.2" for the "ha-481559" container
	I1006 14:44:34.548729  682995 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1006 14:44:34.566959  682995 cli_runner.go:164] Run: docker volume create ha-481559 --label name.minikube.sigs.k8s.io=ha-481559 --label created_by.minikube.sigs.k8s.io=true
	I1006 14:44:34.586001  682995 oci.go:103] Successfully created a docker volume ha-481559
	I1006 14:44:34.586088  682995 cli_runner.go:164] Run: docker run --rm --name ha-481559-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-481559 --entrypoint /usr/bin/test -v ha-481559:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib
	I1006 14:44:34.994169  682995 oci.go:107] Successfully prepared a docker volume ha-481559
	I1006 14:44:34.994233  682995 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1006 14:44:34.994280  682995 kic.go:194] Starting extracting preloaded images to volume ...
	I1006 14:44:34.994349  682995 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21701-626179/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-481559:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir
	I1006 14:44:39.551248  682995 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21701-626179/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-481559:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir: (4.556814521s)
	I1006 14:44:39.551287  682995 kic.go:203] duration metric: took 4.557022471s to extract preloaded images to volume ...
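The extraction step is a one-shot container whose entrypoint is tar: it unpacks the lz4-compressed preload straight into the ha-481559 volume so the node container starts with /var already populated. The same pattern by hand, with the tarball path pulled into a variable for readability:

	# PRELOAD holds the same preloaded-images tarball used above.
	PRELOAD=/home/jenkins/minikube-integration/21701-626179/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	docker run --rm --entrypoint /usr/bin/tar \
	  -v "$PRELOAD:/preloaded.tar:ro" -v ha-481559:/extractDir \
	  gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d \
	  -I lz4 -xf /preloaded.tar -C /extractDir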
	W1006 14:44:39.551374  682995 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1006 14:44:39.551406  682995 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1006 14:44:39.551451  682995 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1006 14:44:39.608040  682995 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-481559 --name ha-481559 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-481559 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-481559 --network ha-481559 --ip 192.168.49.2 --volume ha-481559:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d
	I1006 14:44:39.865946  682995 cli_runner.go:164] Run: docker container inspect ha-481559 --format={{.State.Running}}
	I1006 14:44:39.883061  682995 cli_runner.go:164] Run: docker container inspect ha-481559 --format={{.State.Status}}
	I1006 14:44:39.901066  682995 cli_runner.go:164] Run: docker exec ha-481559 stat /var/lib/dpkg/alternatives/iptables
	I1006 14:44:39.951869  682995 oci.go:144] the created container "ha-481559" has a running status.
	I1006 14:44:39.951908  682995 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa...
	I1006 14:44:40.176341  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1006 14:44:40.176392  682995 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1006 14:44:40.205643  682995 cli_runner.go:164] Run: docker container inspect ha-481559 --format={{.State.Status}}
	I1006 14:44:40.227924  682995 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1006 14:44:40.227948  682995 kic_runner.go:114] Args: [docker exec --privileged ha-481559 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1006 14:44:40.277808  682995 cli_runner.go:164] Run: docker container inspect ha-481559 --format={{.State.Status}}
	I1006 14:44:40.297063  682995 machine.go:93] provisionDockerMachine start ...
	I1006 14:44:40.297156  682995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:44:40.315828  682995 main.go:141] libmachine: Using SSH client type: native
	I1006 14:44:40.316109  682995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32883 <nil> <nil>}
	I1006 14:44:40.316124  682995 main.go:141] libmachine: About to run SSH command:
	hostname
	I1006 14:44:40.461735  682995 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-481559
	
	I1006 14:44:40.461771  682995 ubuntu.go:182] provisioning hostname "ha-481559"
	I1006 14:44:40.461843  682995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:44:40.481222  682995 main.go:141] libmachine: Using SSH client type: native
	I1006 14:44:40.481551  682995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32883 <nil> <nil>}
	I1006 14:44:40.481575  682995 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-481559 && echo "ha-481559" | sudo tee /etc/hostname
	I1006 14:44:40.636624  682995 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-481559
	
	I1006 14:44:40.636709  682995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:44:40.655017  682995 main.go:141] libmachine: Using SSH client type: native
	I1006 14:44:40.655283  682995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32883 <nil> <nil>}
	I1006 14:44:40.655302  682995 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-481559' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-481559/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-481559' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1006 14:44:40.801276  682995 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1006 14:44:40.801313  682995 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21701-626179/.minikube CaCertPath:/home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21701-626179/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21701-626179/.minikube}
	I1006 14:44:40.801332  682995 ubuntu.go:190] setting up certificates
	I1006 14:44:40.801344  682995 provision.go:84] configureAuth start
	I1006 14:44:40.801398  682995 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-481559
	I1006 14:44:40.819000  682995 provision.go:143] copyHostCerts
	I1006 14:44:40.819052  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21701-626179/.minikube/ca.pem
	I1006 14:44:40.819089  682995 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-626179/.minikube/ca.pem, removing ...
	I1006 14:44:40.819099  682995 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-626179/.minikube/ca.pem
	I1006 14:44:40.819169  682995 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21701-626179/.minikube/ca.pem (1082 bytes)
	I1006 14:44:40.819281  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21701-626179/.minikube/cert.pem
	I1006 14:44:40.819304  682995 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-626179/.minikube/cert.pem, removing ...
	I1006 14:44:40.819309  682995 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-626179/.minikube/cert.pem
	I1006 14:44:40.819338  682995 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21701-626179/.minikube/cert.pem (1123 bytes)
	I1006 14:44:40.819400  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21701-626179/.minikube/key.pem
	I1006 14:44:40.819416  682995 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-626179/.minikube/key.pem, removing ...
	I1006 14:44:40.819428  682995 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-626179/.minikube/key.pem
	I1006 14:44:40.819460  682995 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21701-626179/.minikube/key.pem (1679 bytes)
	I1006 14:44:40.819525  682995 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21701-626179/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca-key.pem org=jenkins.ha-481559 san=[127.0.0.1 192.168.49.2 ha-481559 localhost minikube]
	I1006 14:44:40.896257  682995 provision.go:177] copyRemoteCerts
	I1006 14:44:40.896328  682995 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1006 14:44:40.896370  682995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:44:40.914092  682995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32883 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa Username:docker}
	I1006 14:44:41.016898  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1006 14:44:41.016969  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1006 14:44:41.037131  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1006 14:44:41.037215  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1006 14:44:41.055180  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1006 14:44:41.055258  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1006 14:44:41.073045  682995 provision.go:87] duration metric: took 271.684433ms to configureAuth
	I1006 14:44:41.073074  682995 ubuntu.go:206] setting minikube options for container-runtime
	I1006 14:44:41.073312  682995 config.go:182] Loaded profile config "ha-481559": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 14:44:41.073456  682995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:44:41.092548  682995 main.go:141] libmachine: Using SSH client type: native
	I1006 14:44:41.092838  682995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32883 <nil> <nil>}
	I1006 14:44:41.092869  682995 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1006 14:44:41.356221  682995 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
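The override file echoed back above configures cri-o to treat the in-cluster service CIDR as an insecure registry. A sketch for double-checking it from the host, assuming (as minikube does) that the image's crio.service picks the file up via an EnvironmentFile directive:

	# Inspect the override and the unit that is expected to source it.
	docker exec ha-481559 cat /etc/sysconfig/crio.minikube
	docker exec ha-481559 systemctl cat crio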
	
	I1006 14:44:41.356247  682995 machine.go:96] duration metric: took 1.059160507s to provisionDockerMachine
	I1006 14:44:41.356259  682995 client.go:171] duration metric: took 6.924524382s to LocalClient.Create
	I1006 14:44:41.356282  682995 start.go:167] duration metric: took 6.924591304s to libmachine.API.Create "ha-481559"
	I1006 14:44:41.356295  682995 start.go:293] postStartSetup for "ha-481559" (driver="docker")
	I1006 14:44:41.356322  682995 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1006 14:44:41.356396  682995 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1006 14:44:41.356453  682995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:44:41.374424  682995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32883 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa Username:docker}
	I1006 14:44:41.479545  682995 ssh_runner.go:195] Run: cat /etc/os-release
	I1006 14:44:41.483318  682995 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1006 14:44:41.483345  682995 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1006 14:44:41.483356  682995 filesync.go:126] Scanning /home/jenkins/minikube-integration/21701-626179/.minikube/addons for local assets ...
	I1006 14:44:41.483402  682995 filesync.go:126] Scanning /home/jenkins/minikube-integration/21701-626179/.minikube/files for local assets ...
	I1006 14:44:41.483499  682995 filesync.go:149] local asset: /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem -> 6297192.pem in /etc/ssl/certs
	I1006 14:44:41.483510  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem -> /etc/ssl/certs/6297192.pem
	I1006 14:44:41.483603  682995 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1006 14:44:41.491409  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem --> /etc/ssl/certs/6297192.pem (1708 bytes)
	I1006 14:44:41.511609  682995 start.go:296] duration metric: took 155.29938ms for postStartSetup
	I1006 14:44:41.511914  682995 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-481559
	I1006 14:44:41.529867  682995 profile.go:143] Saving config to /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/config.json ...
	I1006 14:44:41.530158  682995 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1006 14:44:41.530223  682995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:44:41.547995  682995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32883 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa Username:docker}
	I1006 14:44:41.647810  682995 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1006 14:44:41.652637  682995 start.go:128] duration metric: took 7.223117194s to createHost
	I1006 14:44:41.652662  682995 start.go:83] releasing machines lock for "ha-481559", held for 7.223254897s
	I1006 14:44:41.652730  682995 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-481559
	I1006 14:44:41.670486  682995 ssh_runner.go:195] Run: cat /version.json
	I1006 14:44:41.670511  682995 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1006 14:44:41.670555  682995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:44:41.670581  682995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:44:41.689278  682995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32883 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa Username:docker}
	I1006 14:44:41.689801  682995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32883 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa Username:docker}
	I1006 14:44:41.845142  682995 ssh_runner.go:195] Run: systemctl --version
	I1006 14:44:41.852333  682995 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1006 14:44:41.886799  682995 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1006 14:44:41.891575  682995 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1006 14:44:41.891645  682995 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1006 14:44:41.918020  682995 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1006 14:44:41.918049  682995 start.go:495] detecting cgroup driver to use...
	I1006 14:44:41.918088  682995 detect.go:190] detected "systemd" cgroup driver on host os
	I1006 14:44:41.918148  682995 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1006 14:44:41.934827  682995 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1006 14:44:41.946573  682995 docker.go:218] disabling cri-docker service (if available) ...
	I1006 14:44:41.946626  682995 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1006 14:44:41.961811  682995 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1006 14:44:41.978333  682995 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1006 14:44:42.056893  682995 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1006 14:44:42.140645  682995 docker.go:234] disabling docker service ...
	I1006 14:44:42.140713  682995 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1006 14:44:42.159372  682995 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1006 14:44:42.171857  682995 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1006 14:44:42.255908  682995 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1006 14:44:42.340081  682995 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1006 14:44:42.352916  682995 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1006 14:44:42.367142  682995 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1006 14:44:42.367215  682995 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:44:42.377866  682995 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1006 14:44:42.377939  682995 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:44:42.387157  682995 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:44:42.395944  682995 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:44:42.404768  682995 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1006 14:44:42.412712  682995 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:44:42.420910  682995 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:44:42.434108  682995 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:44:42.442895  682995 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1006 14:44:42.450289  682995 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1006 14:44:42.457667  682995 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 14:44:42.535385  682995 ssh_runner.go:195] Run: sudo systemctl restart crio
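The sed series above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pause image, systemd cgroup manager, conmon cgroup, and the unprivileged-port sysctl, followed by the daemon-reload and restart. One grep confirms all four settings landed:

	# Verify the rewritten cri-o drop-in inside the node container.
	docker exec ha-481559 grep -E \
	  'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf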
	I1006 14:44:42.643348  682995 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1006 14:44:42.643424  682995 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1006 14:44:42.647404  682995 start.go:563] Will wait 60s for crictl version
	I1006 14:44:42.647467  682995 ssh_runner.go:195] Run: which crictl
	I1006 14:44:42.651000  682995 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1006 14:44:42.675962  682995 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1006 14:44:42.676044  682995 ssh_runner.go:195] Run: crio --version
	I1006 14:44:42.705541  682995 ssh_runner.go:195] Run: crio --version
	I1006 14:44:42.736773  682995 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1006 14:44:42.738090  682995 cli_runner.go:164] Run: docker network inspect ha-481559 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1006 14:44:42.754892  682995 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1006 14:44:42.759274  682995 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
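The one-liner above is minikube's filter-and-append idiom for pinning a hosts entry: drop any stale tab-delimited line for the name, append the fresh mapping, and copy the temp file back over /etc/hosts. Generalized, with NAME and IP as placeholders:

	# NAME/IP are placeholders; $'\t' keeps the tab-delimited format
	# that the grep end-anchor relies on.
	NAME=host.minikube.internal IP=192.168.49.1
	{ grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts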
	I1006 14:44:42.770415  682995 kubeadm.go:883] updating cluster {Name:ha-481559 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-481559 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1006 14:44:42.770534  682995 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1006 14:44:42.770581  682995 ssh_runner.go:195] Run: sudo crictl images --output json
	I1006 14:44:42.805187  682995 crio.go:514] all images are preloaded for cri-o runtime.
	I1006 14:44:42.805221  682995 crio.go:433] Images already preloaded, skipping extraction
	I1006 14:44:42.805274  682995 ssh_runner.go:195] Run: sudo crictl images --output json
	I1006 14:44:42.831096  682995 crio.go:514] all images are preloaded for cri-o runtime.
	I1006 14:44:42.831123  682995 cache_images.go:85] Images are preloaded, skipping loading
	I1006 14:44:42.831132  682995 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1006 14:44:42.831244  682995 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-481559 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-481559 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
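The ExecStart override above lands as the 10-kubeadm.conf systemd drop-in scp'd a few lines below; the empty ExecStart= line first clears the distribution default. To view the effective unit with the drop-in applied:

	# Print kubelet.service plus every drop-in, 10-kubeadm.conf included.
	docker exec ha-481559 systemctl cat kubelet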
	I1006 14:44:42.831321  682995 ssh_runner.go:195] Run: crio config
	I1006 14:44:42.877768  682995 cni.go:84] Creating CNI manager for ""
	I1006 14:44:42.877790  682995 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1006 14:44:42.877819  682995 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1006 14:44:42.877840  682995 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-481559 NodeName:ha-481559 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1006 14:44:42.877966  682995 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-481559"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
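The rendered config stitches four kubeadm API objects into a single document: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. Recent kubeadm releases can schema-check such a file without touching node state; a sketch, assuming the binary and path used later in this log:

	# Validate the generated config in place; non-zero exit on errors.
	docker exec ha-481559 /var/lib/minikube/binaries/v1.34.1/kubeadm \
	  config validate --config /var/tmp/minikube/kubeadm.yaml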
	
	I1006 14:44:42.877993  682995 kube-vip.go:115] generating kube-vip config ...
	I1006 14:44:42.878035  682995 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1006 14:44:42.890886  682995 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1006 14:44:42.890995  682995 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
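The manifest above is the fallback shape: because the lsmod probe found no ip_vs modules, kube-vip still serves the 192.168.49.254 VIP via ARP leader election (vip_arp/cp_enable), just without IPVS load-balancing across control planes. A hedged sketch for loading the standard modules on the host beforehand (availability depends on the running kernel build):

	# Load the in-tree IPVS modules so the lsmod probe would succeed.
	sudo modprobe -a ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack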
	I1006 14:44:42.891046  682995 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1006 14:44:42.899063  682995 binaries.go:44] Found k8s binaries, skipping transfer
	I1006 14:44:42.899132  682995 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1006 14:44:42.906926  682995 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1006 14:44:42.919358  682995 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1006 14:44:42.934141  682995 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1006 14:44:42.945961  682995 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I1006 14:44:42.959489  682995 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1006 14:44:42.962953  682995 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1006 14:44:42.972760  682995 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 14:44:43.053996  682995 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1006 14:44:43.077665  682995 certs.go:69] Setting up /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559 for IP: 192.168.49.2
	I1006 14:44:43.077692  682995 certs.go:195] generating shared ca certs ...
	I1006 14:44:43.077714  682995 certs.go:227] acquiring lock for ca certs: {Name:mka0cc25cb6a953e937aa825fc55167759271aaf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:44:43.077856  682995 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21701-626179/.minikube/ca.key
	I1006 14:44:43.077899  682995 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21701-626179/.minikube/proxy-client-ca.key
	I1006 14:44:43.077909  682995 certs.go:257] generating profile certs ...
	I1006 14:44:43.077963  682995 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/client.key
	I1006 14:44:43.077983  682995 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/client.crt with IP's: []
	I1006 14:44:43.259387  682995 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/client.crt ...
	I1006 14:44:43.259418  682995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/client.crt: {Name:mk058803c7a7f0f2aa3fb547a3aafbba9518c3f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:44:43.259607  682995 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/client.key ...
	I1006 14:44:43.259619  682995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/client.key: {Name:mk0ae3492597f7c1edf0d7262770452fa244a40b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:44:43.265151  682995 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.key.6031b710
	I1006 14:44:43.265175  682995 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.crt.6031b710 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I1006 14:44:43.807062  682995 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.crt.6031b710 ...
	I1006 14:44:43.807095  682995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.crt.6031b710: {Name:mk30dd14f07a4b732bb60853cc2fd5f84f73e2f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:44:43.807283  682995 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.key.6031b710 ...
	I1006 14:44:43.807298  682995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.key.6031b710: {Name:mkf3f5fbdf7957143c03cb611320a2e02acb94c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:44:43.807374  682995 certs.go:382] copying /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.crt.6031b710 -> /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.crt
	I1006 14:44:43.807489  682995 certs.go:386] copying /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.key.6031b710 -> /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.key
	I1006 14:44:43.807558  682995 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.key
	I1006 14:44:43.807574  682995 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.crt with IP's: []
	I1006 14:44:43.994115  682995 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.crt ...
	I1006 14:44:43.994149  682995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.crt: {Name:mk715c6902e25626016d7eb8fdb7b52f0fdce895 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:44:43.994338  682995 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.key ...
	I1006 14:44:43.994350  682995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.key: {Name:mka438ddf42b96ca34511dda1ce60f08f1d48b59 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:44:43.994429  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1006 14:44:43.994449  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1006 14:44:43.994460  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1006 14:44:43.994470  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1006 14:44:43.994480  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1006 14:44:43.994490  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1006 14:44:43.994510  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1006 14:44:43.994522  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1006 14:44:43.994570  682995 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/629719.pem (1338 bytes)
	W1006 14:44:43.994617  682995 certs.go:480] ignoring /home/jenkins/minikube-integration/21701-626179/.minikube/certs/629719_empty.pem, impossibly tiny 0 bytes
	I1006 14:44:43.994630  682995 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca-key.pem (1675 bytes)
	I1006 14:44:43.994653  682995 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem (1082 bytes)
	I1006 14:44:43.994674  682995 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/cert.pem (1123 bytes)
	I1006 14:44:43.994701  682995 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/key.pem (1679 bytes)
	I1006 14:44:43.994739  682995 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem (1708 bytes)
	I1006 14:44:43.994772  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem -> /usr/share/ca-certificates/6297192.pem
	I1006 14:44:43.994786  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1006 14:44:43.994798  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/629719.pem -> /usr/share/ca-certificates/629719.pem
	I1006 14:44:43.995423  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1006 14:44:44.014422  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1006 14:44:44.032422  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1006 14:44:44.050727  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1006 14:44:44.068490  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1006 14:44:44.085540  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1006 14:44:44.102941  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1006 14:44:44.121043  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1006 14:44:44.139583  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem --> /usr/share/ca-certificates/6297192.pem (1708 bytes)
	I1006 14:44:44.159654  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1006 14:44:44.176939  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/certs/629719.pem --> /usr/share/ca-certificates/629719.pem (1338 bytes)
	I1006 14:44:44.194332  682995 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1006 14:44:44.207641  682995 ssh_runner.go:195] Run: openssl version
	I1006 14:44:44.214349  682995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6297192.pem && ln -fs /usr/share/ca-certificates/6297192.pem /etc/ssl/certs/6297192.pem"
	I1006 14:44:44.223426  682995 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6297192.pem
	I1006 14:44:44.227339  682995 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  6 14:13 /usr/share/ca-certificates/6297192.pem
	I1006 14:44:44.227401  682995 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6297192.pem
	I1006 14:44:44.261578  682995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6297192.pem /etc/ssl/certs/3ec20f2e.0"
	I1006 14:44:44.270472  682995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1006 14:44:44.279083  682995 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1006 14:44:44.282749  682995 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  6 13:56 /usr/share/ca-certificates/minikubeCA.pem
	I1006 14:44:44.282813  682995 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1006 14:44:44.316484  682995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1006 14:44:44.325228  682995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/629719.pem && ln -fs /usr/share/ca-certificates/629719.pem /etc/ssl/certs/629719.pem"
	I1006 14:44:44.334098  682995 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/629719.pem
	I1006 14:44:44.337988  682995 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  6 14:13 /usr/share/ca-certificates/629719.pem
	I1006 14:44:44.338051  682995 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/629719.pem
	I1006 14:44:44.371914  682995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/629719.pem /etc/ssl/certs/51391683.0"
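The three test-and-link passes above install each CA under the OpenSSL hashed-lookup convention: openssl x509 -hash prints the subject-name hash, and a <hash>.0 symlink in /etc/ssl/certs is what the default verify path resolves. The same step by hand for one of the certificates:

	# Recreate the hashed symlink for the 629719.pem CA manually.
	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/629719.pem)
	sudo ln -fs /usr/share/ca-certificates/629719.pem "/etc/ssl/certs/${h}.0"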
	I1006 14:44:44.380847  682995 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1006 14:44:44.384643  682995 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1006 14:44:44.384694  682995 kubeadm.go:400] StartCluster: {Name:ha-481559 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-481559 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 14:44:44.384758  682995 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1006 14:44:44.384823  682995 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1006 14:44:44.413083  682995 cri.go:89] found id: ""
	I1006 14:44:44.413145  682995 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1006 14:44:44.421446  682995 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1006 14:44:44.429380  682995 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1006 14:44:44.429431  682995 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1006 14:44:44.437643  682995 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1006 14:44:44.437667  682995 kubeadm.go:157] found existing configuration files:
	
	I1006 14:44:44.437726  682995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1006 14:44:44.445948  682995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1006 14:44:44.446021  682995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1006 14:44:44.453451  682995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1006 14:44:44.460986  682995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1006 14:44:44.461064  682995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1006 14:44:44.468259  682995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1006 14:44:44.475830  682995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1006 14:44:44.475882  682995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1006 14:44:44.483080  682995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1006 14:44:44.490569  682995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1006 14:44:44.490632  682995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1006 14:44:44.498056  682995 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1006 14:44:44.560210  682995 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1006 14:44:44.618315  682995 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1006 14:48:49.762009  682995 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1006 14:48:49.762136  682995 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1006 14:48:49.765019  682995 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1006 14:48:49.765065  682995 kubeadm.go:318] [preflight] Running pre-flight checks
	I1006 14:48:49.765142  682995 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1006 14:48:49.765192  682995 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1006 14:48:49.765263  682995 kubeadm.go:318] OS: Linux
	I1006 14:48:49.765329  682995 kubeadm.go:318] CGROUPS_CPU: enabled
	I1006 14:48:49.765384  682995 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1006 14:48:49.765424  682995 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1006 14:48:49.765465  682995 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1006 14:48:49.765507  682995 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1006 14:48:49.765557  682995 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1006 14:48:49.765644  682995 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1006 14:48:49.765713  682995 kubeadm.go:318] CGROUPS_IO: enabled
	I1006 14:48:49.765816  682995 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1006 14:48:49.765897  682995 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1006 14:48:49.765974  682995 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1006 14:48:49.766033  682995 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1006 14:48:49.768189  682995 out.go:252]   - Generating certificates and keys ...
	I1006 14:48:49.768304  682995 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1006 14:48:49.768391  682995 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1006 14:48:49.768495  682995 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1006 14:48:49.768546  682995 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1006 14:48:49.768600  682995 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1006 14:48:49.768641  682995 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1006 14:48:49.768684  682995 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1006 14:48:49.768778  682995 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [ha-481559 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1006 14:48:49.768847  682995 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1006 14:48:49.768982  682995 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [ha-481559 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1006 14:48:49.769042  682995 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1006 14:48:49.769108  682995 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1006 14:48:49.769166  682995 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1006 14:48:49.769263  682995 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1006 14:48:49.769339  682995 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1006 14:48:49.769416  682995 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1006 14:48:49.769489  682995 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1006 14:48:49.769549  682995 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1006 14:48:49.769601  682995 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1006 14:48:49.769671  682995 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1006 14:48:49.769753  682995 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1006 14:48:49.771489  682995 out.go:252]   - Booting up control plane ...
	I1006 14:48:49.771577  682995 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1006 14:48:49.771664  682995 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1006 14:48:49.771742  682995 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1006 14:48:49.771858  682995 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1006 14:48:49.771974  682995 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1006 14:48:49.772108  682995 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1006 14:48:49.772220  682995 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1006 14:48:49.772288  682995 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1006 14:48:49.772413  682995 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1006 14:48:49.772556  682995 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1006 14:48:49.772647  682995 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.501252368s
	I1006 14:48:49.772772  682995 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1006 14:48:49.772891  682995 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1006 14:48:49.772971  682995 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1006 14:48:49.773033  682995 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1006 14:48:49.773108  682995 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.001319326s
	I1006 14:48:49.773189  682995 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001358761s
	I1006 14:48:49.773304  682995 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.001281021s
	I1006 14:48:49.773319  682995 kubeadm.go:318] 
	I1006 14:48:49.773407  682995 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1006 14:48:49.773472  682995 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1006 14:48:49.773545  682995 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1006 14:48:49.773657  682995 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1006 14:48:49.773771  682995 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1006 14:48:49.773850  682995 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1006 14:48:49.773891  682995 kubeadm.go:318] 
	W1006 14:48:49.774048  682995 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ha-481559 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ha-481559 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.501252368s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.001319326s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001358761s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001281021s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
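
Following kubeadm's own advice above, a minimal triage sketch, assuming shell access to the node (for example via minikube ssh -p ha-481559); CONTAINERID is a placeholder for an ID taken from the first command's output:

	# list every Kubernetes container CRI-O knows about, including exited ones
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# then read the logs of whichever container is failing
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID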
	
	I1006 14:48:49.774147  682995 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1006 14:48:52.524900  682995 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.75072398s)
	I1006 14:48:52.524985  682995 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1006 14:48:52.538104  682995 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1006 14:48:52.538173  682995 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1006 14:48:52.546610  682995 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1006 14:48:52.546639  682995 kubeadm.go:157] found existing configuration files:
	
	I1006 14:48:52.546692  682995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1006 14:48:52.555271  682995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1006 14:48:52.555334  682995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1006 14:48:52.564502  682995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1006 14:48:52.572861  682995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1006 14:48:52.572925  682995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1006 14:48:52.580681  682995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1006 14:48:52.588574  682995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1006 14:48:52.588636  682995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1006 14:48:52.596314  682995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1006 14:48:52.604007  682995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1006 14:48:52.604073  682995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1006 14:48:52.611967  682995 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1006 14:48:52.650794  682995 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1006 14:48:52.650844  682995 kubeadm.go:318] [preflight] Running pre-flight checks
	I1006 14:48:52.671446  682995 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1006 14:48:52.671559  682995 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1006 14:48:52.671628  682995 kubeadm.go:318] OS: Linux
	I1006 14:48:52.671718  682995 kubeadm.go:318] CGROUPS_CPU: enabled
	I1006 14:48:52.671766  682995 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1006 14:48:52.671811  682995 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1006 14:48:52.671850  682995 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1006 14:48:52.671890  682995 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1006 14:48:52.671928  682995 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1006 14:48:52.671972  682995 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1006 14:48:52.672010  682995 kubeadm.go:318] CGROUPS_IO: enabled
	I1006 14:48:52.732758  682995 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1006 14:48:52.732876  682995 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1006 14:48:52.732979  682995 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1006 14:48:52.739914  682995 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1006 14:48:52.743428  682995 out.go:252]   - Generating certificates and keys ...
	I1006 14:48:52.743535  682995 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1006 14:48:52.743654  682995 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1006 14:48:52.743727  682995 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1006 14:48:52.743777  682995 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1006 14:48:52.743861  682995 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1006 14:48:52.743911  682995 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1006 14:48:52.743985  682995 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1006 14:48:52.744055  682995 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1006 14:48:52.744143  682995 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1006 14:48:52.744228  682995 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1006 14:48:52.744266  682995 kubeadm.go:318] [certs] Using the existing "sa" key
	I1006 14:48:52.744323  682995 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1006 14:48:53.107297  682995 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1006 14:48:53.300701  682995 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1006 14:48:53.503166  682995 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1006 14:48:53.664024  682995 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1006 14:48:53.725865  682995 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1006 14:48:53.726293  682995 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1006 14:48:53.728797  682995 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1006 14:48:53.730586  682995 out.go:252]   - Booting up control plane ...
	I1006 14:48:53.730720  682995 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1006 14:48:53.730830  682995 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1006 14:48:53.730903  682995 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1006 14:48:53.744534  682995 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1006 14:48:53.744672  682995 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1006 14:48:53.752267  682995 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1006 14:48:53.752422  682995 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1006 14:48:53.752505  682995 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1006 14:48:53.852049  682995 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1006 14:48:53.852226  682995 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1006 14:48:54.353729  682995 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 501.825241ms
	I1006 14:48:54.356542  682995 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1006 14:48:54.356619  682995 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1006 14:48:54.356695  682995 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1006 14:48:54.356819  682995 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1006 14:52:54.358331  682995 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.001082251s
	I1006 14:52:54.358653  682995 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001136686s
	I1006 14:52:54.358853  682995 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.001070627s
	I1006 14:52:54.358881  682995 kubeadm.go:318] 
	I1006 14:52:54.359059  682995 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1006 14:52:54.359298  682995 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1006 14:52:54.359539  682995 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1006 14:52:54.359760  682995 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1006 14:52:54.359952  682995 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1006 14:52:54.360116  682995 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1006 14:52:54.360148  682995 kubeadm.go:318] 
	I1006 14:52:54.363033  682995 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1006 14:52:54.363163  682995 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1006 14:52:54.363696  682995 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1006 14:52:54.363761  682995 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
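
The three health endpoints kubeadm polls are plain HTTPS URLs, so they can also be probed by hand from inside the node while the wait is running; a quick sketch (-k skips certificate verification, which is acceptable for a liveness probe; in this run all three would be refused, matching the log):

	# the same checks kubeadm performs, run manually
	curl -ks https://192.168.49.2:8443/livez; echo
	curl -ks https://127.0.0.1:10257/healthz; echo
	curl -ks https://127.0.0.1:10259/livez; echo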
	I1006 14:52:54.363858  682995 kubeadm.go:402] duration metric: took 8m9.979166519s to StartCluster
	I1006 14:52:54.363946  682995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:52:54.364031  682995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:52:54.392579  682995 cri.go:89] found id: ""
	I1006 14:52:54.392622  682995 logs.go:282] 0 containers: []
	W1006 14:52:54.392631  682995 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:52:54.392638  682995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:52:54.392693  682995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:52:54.420188  682995 cri.go:89] found id: ""
	I1006 14:52:54.420226  682995 logs.go:282] 0 containers: []
	W1006 14:52:54.420237  682995 logs.go:284] No container was found matching "etcd"
	I1006 14:52:54.420245  682995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:52:54.420299  682995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:52:54.445694  682995 cri.go:89] found id: ""
	I1006 14:52:54.445723  682995 logs.go:282] 0 containers: []
	W1006 14:52:54.445733  682995 logs.go:284] No container was found matching "coredns"
	I1006 14:52:54.445740  682995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:52:54.445791  682995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:52:54.471923  682995 cri.go:89] found id: ""
	I1006 14:52:54.471954  682995 logs.go:282] 0 containers: []
	W1006 14:52:54.471962  682995 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:52:54.471971  682995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:52:54.472030  682995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:52:54.498805  682995 cri.go:89] found id: ""
	I1006 14:52:54.498836  682995 logs.go:282] 0 containers: []
	W1006 14:52:54.498848  682995 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:52:54.498857  682995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:52:54.498922  682995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:52:54.524613  682995 cri.go:89] found id: ""
	I1006 14:52:54.524638  682995 logs.go:282] 0 containers: []
	W1006 14:52:54.524646  682995 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:52:54.524652  682995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:52:54.524708  682995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:52:54.551140  682995 cri.go:89] found id: ""
	I1006 14:52:54.551170  682995 logs.go:282] 0 containers: []
	W1006 14:52:54.551181  682995 logs.go:284] No container was found matching "kindnet"
	I1006 14:52:54.551194  682995 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:52:54.551220  682995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:52:54.615573  682995 logs.go:123] Gathering logs for container status ...
	I1006 14:52:54.615607  682995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:52:54.645703  682995 logs.go:123] Gathering logs for kubelet ...
	I1006 14:52:54.645732  682995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:52:54.709506  682995 logs.go:123] Gathering logs for dmesg ...
	I1006 14:52:54.709543  682995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:52:54.722963  682995 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:52:54.722997  682995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:52:54.783016  682995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:52:54.774940    2611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 14:52:54.776283    2611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 14:52:54.777585    2611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 14:52:54.778053    2611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 14:52:54.779590    2611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:52:54.774940    2611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 14:52:54.776283    2611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 14:52:54.777585    2611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 14:52:54.778053    2611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 14:52:54.779590    2611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W1006 14:52:54.783054  682995 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.825241ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.001082251s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001136686s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001070627s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1006 14:52:54.783107  682995 out.go:285] * 
	W1006 14:52:54.783182  682995 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	W1006 14:52:54.783200  682995 out.go:285] * 
	W1006 14:52:54.785658  682995 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1006 14:52:54.789273  682995 out.go:203] 
	W1006 14:52:54.790573  682995 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	W1006 14:52:54.790604  682995 out.go:285] * 
	I1006 14:52:54.791821  682995 out.go:203] 
	
	
	==> CRI-O <==
	Oct 06 14:54:28 ha-481559 crio[777]: time="2025-10-06T14:54:28.246426068Z" level=info msg="createCtr: removing container c1376676dafaf7b4d10a72a589a3ae2d56ecf790744e031ae536ebf8175e4485" id=ecbc1c2a-3ed9-4452-81c8-a6b6b312f34f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:54:28 ha-481559 crio[777]: time="2025-10-06T14:54:28.246465998Z" level=info msg="createCtr: deleting container c1376676dafaf7b4d10a72a589a3ae2d56ecf790744e031ae536ebf8175e4485 from storage" id=ecbc1c2a-3ed9-4452-81c8-a6b6b312f34f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:54:28 ha-481559 crio[777]: time="2025-10-06T14:54:28.249858456Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-ha-481559_kube-system_520c6060936b1c2aac479c99ed6c0355_0" id=ecbc1c2a-3ed9-4452-81c8-a6b6b312f34f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:54:30 ha-481559 crio[777]: time="2025-10-06T14:54:30.222474023Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=e2afb2cc-8b95-45ef-839d-d0dd5c34800d name=/runtime.v1.ImageService/ImageStatus
	Oct 06 14:54:30 ha-481559 crio[777]: time="2025-10-06T14:54:30.22368953Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=401de123-749a-4ebc-8ab1-078ad9c73c34 name=/runtime.v1.ImageService/ImageStatus
	Oct 06 14:54:30 ha-481559 crio[777]: time="2025-10-06T14:54:30.224710592Z" level=info msg="Creating container: kube-system/kube-scheduler-ha-481559/kube-scheduler" id=2af7715c-4231-40ed-a841-9fbd70a525e3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:54:30 ha-481559 crio[777]: time="2025-10-06T14:54:30.225088554Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 14:54:30 ha-481559 crio[777]: time="2025-10-06T14:54:30.22924992Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 14:54:30 ha-481559 crio[777]: time="2025-10-06T14:54:30.229878709Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 14:54:30 ha-481559 crio[777]: time="2025-10-06T14:54:30.249923087Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=2af7715c-4231-40ed-a841-9fbd70a525e3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:54:30 ha-481559 crio[777]: time="2025-10-06T14:54:30.251834213Z" level=info msg="createCtr: deleting container ID 4ccf5071d4a15329b25d201d70f0042454b12c8c9f251bd3ce8f5e7daa11b368 from idIndex" id=2af7715c-4231-40ed-a841-9fbd70a525e3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:54:30 ha-481559 crio[777]: time="2025-10-06T14:54:30.251887095Z" level=info msg="createCtr: removing container 4ccf5071d4a15329b25d201d70f0042454b12c8c9f251bd3ce8f5e7daa11b368" id=2af7715c-4231-40ed-a841-9fbd70a525e3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:54:30 ha-481559 crio[777]: time="2025-10-06T14:54:30.251932195Z" level=info msg="createCtr: deleting container 4ccf5071d4a15329b25d201d70f0042454b12c8c9f251bd3ce8f5e7daa11b368 from storage" id=2af7715c-4231-40ed-a841-9fbd70a525e3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:54:30 ha-481559 crio[777]: time="2025-10-06T14:54:30.2573433Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-ha-481559_kube-system_cc93cb8d89afaa943672c70952b45174_0" id=2af7715c-4231-40ed-a841-9fbd70a525e3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:54:32 ha-481559 crio[777]: time="2025-10-06T14:54:32.222451545Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=d2a61b85-604a-4a78-b4a0-a6ac7419591f name=/runtime.v1.ImageService/ImageStatus
	Oct 06 14:54:32 ha-481559 crio[777]: time="2025-10-06T14:54:32.223732465Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=756cc7eb-c750-461e-be90-ed96d3fbe167 name=/runtime.v1.ImageService/ImageStatus
	Oct 06 14:54:32 ha-481559 crio[777]: time="2025-10-06T14:54:32.22488018Z" level=info msg="Creating container: kube-system/kube-controller-manager-ha-481559/kube-controller-manager" id=15a4b8e4-4639-4fe3-b26e-d24edb5aaac3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:54:32 ha-481559 crio[777]: time="2025-10-06T14:54:32.225141708Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 14:54:32 ha-481559 crio[777]: time="2025-10-06T14:54:32.228812582Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 14:54:32 ha-481559 crio[777]: time="2025-10-06T14:54:32.229373513Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 14:54:32 ha-481559 crio[777]: time="2025-10-06T14:54:32.246307916Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=15a4b8e4-4639-4fe3-b26e-d24edb5aaac3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:54:32 ha-481559 crio[777]: time="2025-10-06T14:54:32.247725035Z" level=info msg="createCtr: deleting container ID 5cef11bd3bd8e3ab02e1ecc608a3fdc92d76230ae854ce7d96ffba97b455d556 from idIndex" id=15a4b8e4-4639-4fe3-b26e-d24edb5aaac3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:54:32 ha-481559 crio[777]: time="2025-10-06T14:54:32.247768884Z" level=info msg="createCtr: removing container 5cef11bd3bd8e3ab02e1ecc608a3fdc92d76230ae854ce7d96ffba97b455d556" id=15a4b8e4-4639-4fe3-b26e-d24edb5aaac3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:54:32 ha-481559 crio[777]: time="2025-10-06T14:54:32.247812016Z" level=info msg="createCtr: deleting container 5cef11bd3bd8e3ab02e1ecc608a3fdc92d76230ae854ce7d96ffba97b455d556 from storage" id=15a4b8e4-4639-4fe3-b26e-d24edb5aaac3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:54:32 ha-481559 crio[777]: time="2025-10-06T14:54:32.249966611Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-ha-481559_kube-system_5f3181798721fe8691d871f051785efc_0" id=15a4b8e4-4639-4fe3-b26e-d24edb5aaac3 name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:54:34.595674    3739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 14:54:34.596338    3739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 14:54:34.598022    3739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 14:54:34.598555    3739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 14:54:34.600217    3739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	
	
	==> kernel <==
	 14:54:34 up  5:36,  0 user,  load average: 0.31, 0.11, 0.16
	Linux ha-481559 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 06 14:54:28 ha-481559 kubelet[1985]: E1006 14:54:28.250298    1985 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 06 14:54:28 ha-481559 kubelet[1985]:         container etcd start failed in pod etcd-ha-481559_kube-system(520c6060936b1c2aac479c99ed6c0355): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 06 14:54:28 ha-481559 kubelet[1985]:  > logger="UnhandledError"
	Oct 06 14:54:28 ha-481559 kubelet[1985]: E1006 14:54:28.250344    1985 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-ha-481559" podUID="520c6060936b1c2aac479c99ed6c0355"
	Oct 06 14:54:28 ha-481559 kubelet[1985]: E1006 14:54:28.861462    1985 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-481559?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 06 14:54:29 ha-481559 kubelet[1985]: E1006 14:54:29.038379    1985 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8443: connect: connection refused" event="&Event{ObjectMeta:{ha-481559.186bee56630f6256  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-481559,UID:ha-481559,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ha-481559 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ha-481559,},FirstTimestamp:2025-10-06 14:48:54.214861398 +0000 UTC m=+0.361990569,LastTimestamp:2025-10-06 14:48:54.214861398 +0000 UTC m=+0.361990569,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-481559,}"
	Oct 06 14:54:29 ha-481559 kubelet[1985]: I1006 14:54:29.038950    1985 kubelet_node_status.go:75] "Attempting to register node" node="ha-481559"
	Oct 06 14:54:29 ha-481559 kubelet[1985]: E1006 14:54:29.039334    1985 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-481559"
	Oct 06 14:54:30 ha-481559 kubelet[1985]: E1006 14:54:30.221903    1985 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-481559\" not found" node="ha-481559"
	Oct 06 14:54:30 ha-481559 kubelet[1985]: E1006 14:54:30.257771    1985 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 06 14:54:30 ha-481559 kubelet[1985]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 06 14:54:30 ha-481559 kubelet[1985]:  > podSandboxID="28815a6c32deaa458111079bbac61f47b8e22f338f2282fab7d62077c8b07f1e"
	Oct 06 14:54:30 ha-481559 kubelet[1985]: E1006 14:54:30.257901    1985 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 06 14:54:30 ha-481559 kubelet[1985]:         container kube-scheduler start failed in pod kube-scheduler-ha-481559_kube-system(cc93cb8d89afaa943672c70952b45174): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 06 14:54:30 ha-481559 kubelet[1985]:  > logger="UnhandledError"
	Oct 06 14:54:30 ha-481559 kubelet[1985]: E1006 14:54:30.257947    1985 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-ha-481559" podUID="cc93cb8d89afaa943672c70952b45174"
	Oct 06 14:54:32 ha-481559 kubelet[1985]: E1006 14:54:32.221850    1985 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-481559\" not found" node="ha-481559"
	Oct 06 14:54:32 ha-481559 kubelet[1985]: E1006 14:54:32.250411    1985 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 06 14:54:32 ha-481559 kubelet[1985]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 06 14:54:32 ha-481559 kubelet[1985]:  > podSandboxID="ed93c32f27ea2f50c71693ae2d5854b0e5ace377e978db1e844e55a1b66c855a"
	Oct 06 14:54:32 ha-481559 kubelet[1985]: E1006 14:54:32.250537    1985 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 06 14:54:32 ha-481559 kubelet[1985]:         container kube-controller-manager start failed in pod kube-controller-manager-ha-481559_kube-system(5f3181798721fe8691d871f051785efc): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 06 14:54:32 ha-481559 kubelet[1985]:  > logger="UnhandledError"
	Oct 06 14:54:32 ha-481559 kubelet[1985]: E1006 14:54:32.250577    1985 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-ha-481559" podUID="5f3181798721fe8691d871f051785efc"
	Oct 06 14:54:34 ha-481559 kubelet[1985]: E1006 14:54:34.245614    1985 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-481559\" not found"
	

                                                
                                                
-- /stdout --
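Every CreateContainer attempt in the CRI-O and kubelet logs above fails the same way, across etcd, kube-scheduler, and kube-controller-manager: "container create failed: cannot open sd-bus: No such file or directory". With a systemd cgroup manager, the OCI runtime opens an sd-bus connection to systemd to create the container's cgroup scope, so a failure this uniform usually points at a missing or unreachable systemd bus socket inside the kic node rather than at any individual pod. A minimal triage sketch, assuming the ha-481559 node is still up; these commands are illustrative and are not part of the test run:

	# Does systemd expose its bus sockets inside the node?
	minikube ssh -p ha-481559 -- ls -l /run/systemd/private /run/dbus/system_bus_socket
	# Which cgroup manager is CRI-O configured with (systemd vs cgroupfs)?
	minikube ssh -p ha-481559 -- sudo crio config 2>/dev/null | grep -i cgroup_manager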
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-481559 -n ha-481559
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-481559 -n ha-481559: exit status 6 (304.803134ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1006 14:54:34.973907  691924 status.go:458] kubeconfig endpoint: get endpoint: "ha-481559" does not appear in /home/jenkins/minikube-integration/21701-626179/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "ha-481559" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.58s)
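The exit-6 status above is driven by the kubeconfig, not the container: the stderr shows the "ha-481559" entry has disappeared from the kubeconfig used by the run, which is also what produces the "stale minikube-vm" warning. The warning's own suggested fix, sketched here for reference (assuming the same KUBECONFIG as the run) rather than as something the harness executes:

	# Rewrite this profile's kubeconfig entry, then verify the active context:
	out/minikube-linux-amd64 -p ha-481559 update-context
	kubectl config current-context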

                                                
                                    
x
+
TestMultiControlPlane/serial/CopyFile (1.56s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-481559 status --output json --alsologtostderr -v 5
ha_test.go:328: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-481559 status --output json --alsologtostderr -v 5: exit status 6 (288.964382ms)

                                                
                                                
-- stdout --
	{"Name":"ha-481559","Host":"Running","Kubelet":"Running","APIServer":"Stopped","Kubeconfig":"Misconfigured","Worker":false}

                                                
                                                
-- /stdout --
** stderr ** 
	I1006 14:54:35.033227  692039 out.go:360] Setting OutFile to fd 1 ...
	I1006 14:54:35.033471  692039 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 14:54:35.033480  692039 out.go:374] Setting ErrFile to fd 2...
	I1006 14:54:35.033484  692039 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 14:54:35.033690  692039 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-626179/.minikube/bin
	I1006 14:54:35.033839  692039 out.go:368] Setting JSON to true
	I1006 14:54:35.033867  692039 mustload.go:65] Loading cluster: ha-481559
	I1006 14:54:35.033985  692039 notify.go:220] Checking for updates...
	I1006 14:54:35.034165  692039 config.go:182] Loaded profile config "ha-481559": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 14:54:35.034177  692039 status.go:174] checking status of ha-481559 ...
	I1006 14:54:35.034618  692039 cli_runner.go:164] Run: docker container inspect ha-481559 --format={{.State.Status}}
	I1006 14:54:35.053681  692039 status.go:371] ha-481559 host status = "Running" (err=<nil>)
	I1006 14:54:35.053719  692039 host.go:66] Checking if "ha-481559" exists ...
	I1006 14:54:35.054041  692039 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-481559
	I1006 14:54:35.070452  692039 host.go:66] Checking if "ha-481559" exists ...
	I1006 14:54:35.070681  692039 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1006 14:54:35.070732  692039 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:54:35.087615  692039 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32883 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa Username:docker}
	I1006 14:54:35.186569  692039 ssh_runner.go:195] Run: systemctl --version
	I1006 14:54:35.192833  692039 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1006 14:54:35.205224  692039 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 14:54:35.263443  692039 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-06 14:54:35.252583152 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	E1006 14:54:35.263853  692039 status.go:458] kubeconfig endpoint: get endpoint: "ha-481559" does not appear in /home/jenkins/minikube-integration/21701-626179/kubeconfig
	I1006 14:54:35.263883  692039 api_server.go:166] Checking apiserver status ...
	I1006 14:54:35.263914  692039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1006 14:54:35.274329  692039 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1006 14:54:35.274349  692039 status.go:463] ha-481559 apiserver status = Running (err=<nil>)
	I1006 14:54:35.274360  692039 status.go:176] ha-481559 status: &{Name:ha-481559 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Misconfigured Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:330: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-481559 status --output json --alsologtostderr -v 5" : exit status 6
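For this single-node state the status command emits one JSON object (see the stdout above), so the failing fields can be pulled out mechanically rather than read off the dump. A hypothetical post-processing one-liner; jq is assumed to be available on the test host and is not something the harness uses:

	out/minikube-linux-amd64 -p ha-481559 status --output json 2>/dev/null | \
	  jq '{Name, APIServer, Kubeconfig}'
	# here this would yield Name "ha-481559", APIServer "Stopped", Kubeconfig "Misconfigured"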
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/CopyFile]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/CopyFile]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-481559
helpers_test.go:243: (dbg) docker inspect ha-481559:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "8b017d29b6b188c11217460aa328e959f3cceef4aaac68c0efbe9c3f356f27b0",
	        "Created": "2025-10-06T14:44:39.623616791Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 683567,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-06T14:44:39.660699919Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/8b017d29b6b188c11217460aa328e959f3cceef4aaac68c0efbe9c3f356f27b0/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8b017d29b6b188c11217460aa328e959f3cceef4aaac68c0efbe9c3f356f27b0/hostname",
	        "HostsPath": "/var/lib/docker/containers/8b017d29b6b188c11217460aa328e959f3cceef4aaac68c0efbe9c3f356f27b0/hosts",
	        "LogPath": "/var/lib/docker/containers/8b017d29b6b188c11217460aa328e959f3cceef4aaac68c0efbe9c3f356f27b0/8b017d29b6b188c11217460aa328e959f3cceef4aaac68c0efbe9c3f356f27b0-json.log",
	        "Name": "/ha-481559",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-481559:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-481559",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "8b017d29b6b188c11217460aa328e959f3cceef4aaac68c0efbe9c3f356f27b0",
	                "LowerDir": "/var/lib/docker/overlay2/764c4d13032cea1c981a1513641378a4e309e5bc01b5356c6fecba1c3e0e2311-init/diff:/var/lib/docker/overlay2/498c39ad2e273bbda04a4b230222b9767ea2da097b1fe98436168d26143cd080/diff",
	                "MergedDir": "/var/lib/docker/overlay2/764c4d13032cea1c981a1513641378a4e309e5bc01b5356c6fecba1c3e0e2311/merged",
	                "UpperDir": "/var/lib/docker/overlay2/764c4d13032cea1c981a1513641378a4e309e5bc01b5356c6fecba1c3e0e2311/diff",
	                "WorkDir": "/var/lib/docker/overlay2/764c4d13032cea1c981a1513641378a4e309e5bc01b5356c6fecba1c3e0e2311/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-481559",
	                "Source": "/var/lib/docker/volumes/ha-481559/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-481559",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-481559",
	                "name.minikube.sigs.k8s.io": "ha-481559",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7effae92997970d320561b0b86c210815b18a55d65bd555e1bff50158ed38adc",
	            "SandboxKey": "/var/run/docker/netns/7effae929979",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32883"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32884"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32887"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32885"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32886"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-481559": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "42:f3:45:3f:5b:fc",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "be549c6a1ae4457d4629d9a7f86cde88021333ee0af8bb7a740b008115c43dde",
	                    "EndpointID": "b8540561692606ad815fcdb4502c1e36a16141413d3697f4cf48668502930e4c",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-481559",
	                        "8b017d29b6b1"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
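The full inspect document above can also be queried field-by-field with Go templates, which is what the harness itself does elsewhere in this log (the `docker container inspect ... --format` calls). Two equivalent spot checks, shown only as a sketch:

	# Container state, as consumed by minikube's status probe:
	docker container inspect ha-481559 --format '{{.State.Status}}'
	# Host port published for the node's SSH port (22/tcp):
	docker container inspect ha-481559 --format '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'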
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-481559 -n ha-481559
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-481559 -n ha-481559: exit status 6 (295.341546ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1006 14:54:35.578735  692167 status.go:458] kubeconfig endpoint: get endpoint: "ha-481559" does not appear in /home/jenkins/minikube-integration/21701-626179/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
helpers_test.go:252: <<< TestMultiControlPlane/serial/CopyFile FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/CopyFile]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-481559 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/CopyFile logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                      ARGS                                                       │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-135520 image ls --format yaml --alsologtostderr                                                      │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │ 06 Oct 25 14:40 UTC │
	│ ssh     │ functional-135520 ssh pgrep buildkitd                                                                           │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │                     │
	│ image   │ functional-135520 image build -t localhost/my-image:functional-135520 testdata/build --alsologtostderr          │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │ 06 Oct 25 14:40 UTC │
	│ image   │ functional-135520 image ls                                                                                      │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │ 06 Oct 25 14:40 UTC │
	│ delete  │ -p functional-135520                                                                                            │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:44 UTC │ 06 Oct 25 14:44 UTC │
	│ start   │ ha-481559 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:44 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml                                                │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:52 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- rollout status deployment/busybox                                                          │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:52 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:52 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:52 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:52 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:53 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:53 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:53 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:53 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:53 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:53 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:54 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:54 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:54 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- exec  -- nslookup kubernetes.io                                                            │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:54 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- exec  -- nslookup kubernetes.default                                                       │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:54 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local                                     │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:54 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:54 UTC │                     │
	│ node    │ ha-481559 node add --alsologtostderr -v 5                                                                       │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:54 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/06 14:44:34
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1006 14:44:34.230587  682995 out.go:360] Setting OutFile to fd 1 ...
	I1006 14:44:34.230719  682995 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 14:44:34.230728  682995 out.go:374] Setting ErrFile to fd 2...
	I1006 14:44:34.230733  682995 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 14:44:34.230969  682995 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-626179/.minikube/bin
	I1006 14:44:34.231523  682995 out.go:368] Setting JSON to false
	I1006 14:44:34.232538  682995 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":19610,"bootTime":1759742264,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1006 14:44:34.232651  682995 start.go:140] virtualization: kvm guest
	I1006 14:44:34.235278  682995 out.go:179] * [ha-481559] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1006 14:44:34.236668  682995 out.go:179]   - MINIKUBE_LOCATION=21701
	I1006 14:44:34.236708  682995 notify.go:220] Checking for updates...
	I1006 14:44:34.239256  682995 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1006 14:44:34.240475  682995 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21701-626179/kubeconfig
	I1006 14:44:34.242249  682995 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21701-626179/.minikube
	I1006 14:44:34.243577  682995 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1006 14:44:34.244737  682995 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1006 14:44:34.246267  682995 driver.go:421] Setting default libvirt URI to qemu:///system
	I1006 14:44:34.271626  682995 docker.go:123] docker version: linux-28.5.0:Docker Engine - Community
	I1006 14:44:34.271783  682995 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 14:44:34.334697  682995 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-06 14:44:34.323928193 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1006 14:44:34.334819  682995 docker.go:318] overlay module found
	I1006 14:44:34.336770  682995 out.go:179] * Using the docker driver based on user configuration
	I1006 14:44:34.338109  682995 start.go:304] selected driver: docker
	I1006 14:44:34.338130  682995 start.go:924] validating driver "docker" against <nil>
	I1006 14:44:34.338144  682995 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1006 14:44:34.338750  682995 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 14:44:34.398314  682995 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-06 14:44:34.387376197 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1006 14:44:34.398587  682995 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1006 14:44:34.399080  682995 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1006 14:44:34.401095  682995 out.go:179] * Using Docker driver with root privileges
	I1006 14:44:34.402283  682995 cni.go:84] Creating CNI manager for ""
	I1006 14:44:34.402367  682995 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1006 14:44:34.402383  682995 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1006 14:44:34.402476  682995 start.go:348] cluster config:
	{Name:ha-481559 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-481559 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 14:44:34.403829  682995 out.go:179] * Starting "ha-481559" primary control-plane node in "ha-481559" cluster
	I1006 14:44:34.404899  682995 cache.go:123] Beginning downloading kic base image for docker with crio
	I1006 14:44:34.406166  682995 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1006 14:44:34.407227  682995 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1006 14:44:34.407272  682995 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21701-626179/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1006 14:44:34.407284  682995 cache.go:58] Caching tarball of preloaded images
	I1006 14:44:34.407376  682995 preload.go:233] Found /home/jenkins/minikube-integration/21701-626179/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1006 14:44:34.407382  682995 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1006 14:44:34.407387  682995 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1006 14:44:34.407757  682995 profile.go:143] Saving config to /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/config.json ...
	I1006 14:44:34.407793  682995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/config.json: {Name:mkefd90ec0b9eae63c82d60bab053cdf7b5d9b74 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:44:34.429193  682995 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1006 14:44:34.429233  682995 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1006 14:44:34.429254  682995 cache.go:232] Successfully downloaded all kic artifacts
	I1006 14:44:34.429296  682995 start.go:360] acquireMachinesLock for ha-481559: {Name:mk240cd185ab39e9e4d3fa7c476aea5736cb5b11 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1006 14:44:34.429397  682995 start.go:364] duration metric: took 84.055µs to acquireMachinesLock for "ha-481559"
	I1006 14:44:34.429421  682995 start.go:93] Provisioning new machine with config: &{Name:ha-481559 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-481559 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1006 14:44:34.429503  682995 start.go:125] createHost starting for "" (driver="docker")
	I1006 14:44:34.431456  682995 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1006 14:44:34.431692  682995 start.go:159] libmachine.API.Create for "ha-481559" (driver="docker")
	I1006 14:44:34.431725  682995 client.go:168] LocalClient.Create starting
	I1006 14:44:34.431791  682995 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem
	I1006 14:44:34.431825  682995 main.go:141] libmachine: Decoding PEM data...
	I1006 14:44:34.431843  682995 main.go:141] libmachine: Parsing certificate...
	I1006 14:44:34.431939  682995 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21701-626179/.minikube/certs/cert.pem
	I1006 14:44:34.431977  682995 main.go:141] libmachine: Decoding PEM data...
	I1006 14:44:34.431994  682995 main.go:141] libmachine: Parsing certificate...
	I1006 14:44:34.432416  682995 cli_runner.go:164] Run: docker network inspect ha-481559 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1006 14:44:34.449965  682995 cli_runner.go:211] docker network inspect ha-481559 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1006 14:44:34.450053  682995 network_create.go:284] running [docker network inspect ha-481559] to gather additional debugging logs...
	I1006 14:44:34.450071  682995 cli_runner.go:164] Run: docker network inspect ha-481559
	W1006 14:44:34.468682  682995 cli_runner.go:211] docker network inspect ha-481559 returned with exit code 1
	I1006 14:44:34.468713  682995 network_create.go:287] error running [docker network inspect ha-481559]: docker network inspect ha-481559: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-481559 not found
	I1006 14:44:34.468724  682995 network_create.go:289] output of [docker network inspect ha-481559]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-481559 not found
	
	** /stderr **
	I1006 14:44:34.468902  682995 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1006 14:44:34.488223  682995 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ca2540}
	I1006 14:44:34.488276  682995 network_create.go:124] attempt to create docker network ha-481559 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1006 14:44:34.488338  682995 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-481559 ha-481559
	I1006 14:44:34.548630  682995 network_create.go:108] docker network ha-481559 192.168.49.0/24 created
	I1006 14:44:34.548669  682995 kic.go:121] calculated static IP "192.168.49.2" for the "ha-481559" container
	I1006 14:44:34.548729  682995 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1006 14:44:34.566959  682995 cli_runner.go:164] Run: docker volume create ha-481559 --label name.minikube.sigs.k8s.io=ha-481559 --label created_by.minikube.sigs.k8s.io=true
	I1006 14:44:34.586001  682995 oci.go:103] Successfully created a docker volume ha-481559
	I1006 14:44:34.586088  682995 cli_runner.go:164] Run: docker run --rm --name ha-481559-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-481559 --entrypoint /usr/bin/test -v ha-481559:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib
	I1006 14:44:34.994169  682995 oci.go:107] Successfully prepared a docker volume ha-481559
	I1006 14:44:34.994233  682995 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1006 14:44:34.994280  682995 kic.go:194] Starting extracting preloaded images to volume ...
	I1006 14:44:34.994349  682995 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21701-626179/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-481559:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir
	I1006 14:44:39.551248  682995 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21701-626179/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-481559:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir: (4.556814521s)
	I1006 14:44:39.551287  682995 kic.go:203] duration metric: took 4.557022471s to extract preloaded images to volume ...
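
Note: the extraction step above follows a reusable pattern: the preload tarball never touches the node filesystem directly; tar runs inside a throwaway kicbase container with the tarball bind-mounted read-only and the named volume mounted as the target. Stripped to its shape (host path is a placeholder, image digest omitted):

    docker run --rm --entrypoint /usr/bin/tar \
      -v /path/to/preloaded-images.tar.lz4:/preloaded.tar:ro \
      -v ha-481559:/extractDir \
      gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643 \
      -I lz4 -xf /preloaded.tar -C /extractDir
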
	W1006 14:44:39.551374  682995 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1006 14:44:39.551406  682995 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1006 14:44:39.551451  682995 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1006 14:44:39.608040  682995 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-481559 --name ha-481559 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-481559 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-481559 --network ha-481559 --ip 192.168.49.2 --volume ha-481559:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d
	I1006 14:44:39.865946  682995 cli_runner.go:164] Run: docker container inspect ha-481559 --format={{.State.Running}}
	I1006 14:44:39.883061  682995 cli_runner.go:164] Run: docker container inspect ha-481559 --format={{.State.Status}}
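
Note: all of the container's published ports (8443, 22, 2376, 5000, 32443) are bound to ephemeral host ports on 127.0.0.1, which is why the SSH client below dials 127.0.0.1:32883 rather than the node IP. The mappings Docker actually chose can be read back with docker port:

    docker port ha-481559 22     # -> 127.0.0.1:<ephemeral>, 32883 in this run
    docker port ha-481559 8443   # host-side endpoint for the API server
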
	I1006 14:44:39.901066  682995 cli_runner.go:164] Run: docker exec ha-481559 stat /var/lib/dpkg/alternatives/iptables
	I1006 14:44:39.951869  682995 oci.go:144] the created container "ha-481559" has a running status.
	I1006 14:44:39.951908  682995 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa...
	I1006 14:44:40.176341  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1006 14:44:40.176392  682995 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1006 14:44:40.205643  682995 cli_runner.go:164] Run: docker container inspect ha-481559 --format={{.State.Status}}
	I1006 14:44:40.227924  682995 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1006 14:44:40.227948  682995 kic_runner.go:114] Args: [docker exec --privileged ha-481559 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1006 14:44:40.277808  682995 cli_runner.go:164] Run: docker container inspect ha-481559 --format={{.State.Status}}
	I1006 14:44:40.297063  682995 machine.go:93] provisionDockerMachine start ...
	I1006 14:44:40.297156  682995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:44:40.315828  682995 main.go:141] libmachine: Using SSH client type: native
	I1006 14:44:40.316109  682995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32883 <nil> <nil>}
	I1006 14:44:40.316124  682995 main.go:141] libmachine: About to run SSH command:
	hostname
	I1006 14:44:40.461735  682995 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-481559
	
	I1006 14:44:40.461771  682995 ubuntu.go:182] provisioning hostname "ha-481559"
	I1006 14:44:40.461843  682995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:44:40.481222  682995 main.go:141] libmachine: Using SSH client type: native
	I1006 14:44:40.481551  682995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32883 <nil> <nil>}
	I1006 14:44:40.481575  682995 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-481559 && echo "ha-481559" | sudo tee /etc/hostname
	I1006 14:44:40.636624  682995 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-481559
	
	I1006 14:44:40.636709  682995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:44:40.655017  682995 main.go:141] libmachine: Using SSH client type: native
	I1006 14:44:40.655283  682995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32883 <nil> <nil>}
	I1006 14:44:40.655302  682995 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-481559' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-481559/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-481559' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1006 14:44:40.801276  682995 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1006 14:44:40.801313  682995 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21701-626179/.minikube CaCertPath:/home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21701-626179/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21701-626179/.minikube}
	I1006 14:44:40.801332  682995 ubuntu.go:190] setting up certificates
	I1006 14:44:40.801344  682995 provision.go:84] configureAuth start
	I1006 14:44:40.801398  682995 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-481559
	I1006 14:44:40.819000  682995 provision.go:143] copyHostCerts
	I1006 14:44:40.819052  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21701-626179/.minikube/ca.pem
	I1006 14:44:40.819089  682995 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-626179/.minikube/ca.pem, removing ...
	I1006 14:44:40.819099  682995 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-626179/.minikube/ca.pem
	I1006 14:44:40.819169  682995 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21701-626179/.minikube/ca.pem (1082 bytes)
	I1006 14:44:40.819281  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21701-626179/.minikube/cert.pem
	I1006 14:44:40.819304  682995 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-626179/.minikube/cert.pem, removing ...
	I1006 14:44:40.819309  682995 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-626179/.minikube/cert.pem
	I1006 14:44:40.819338  682995 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21701-626179/.minikube/cert.pem (1123 bytes)
	I1006 14:44:40.819400  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21701-626179/.minikube/key.pem
	I1006 14:44:40.819416  682995 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-626179/.minikube/key.pem, removing ...
	I1006 14:44:40.819428  682995 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-626179/.minikube/key.pem
	I1006 14:44:40.819460  682995 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21701-626179/.minikube/key.pem (1679 bytes)
	I1006 14:44:40.819525  682995 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21701-626179/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca-key.pem org=jenkins.ha-481559 san=[127.0.0.1 192.168.49.2 ha-481559 localhost minikube]
	I1006 14:44:40.896257  682995 provision.go:177] copyRemoteCerts
	I1006 14:44:40.896328  682995 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1006 14:44:40.896370  682995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:44:40.914092  682995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32883 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa Username:docker}
	I1006 14:44:41.016898  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1006 14:44:41.016969  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1006 14:44:41.037131  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1006 14:44:41.037215  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1006 14:44:41.055180  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1006 14:44:41.055258  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1006 14:44:41.073045  682995 provision.go:87] duration metric: took 271.684433ms to configureAuth
	I1006 14:44:41.073074  682995 ubuntu.go:206] setting minikube options for container-runtime
	I1006 14:44:41.073312  682995 config.go:182] Loaded profile config "ha-481559": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 14:44:41.073456  682995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:44:41.092548  682995 main.go:141] libmachine: Using SSH client type: native
	I1006 14:44:41.092838  682995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32883 <nil> <nil>}
	I1006 14:44:41.092869  682995 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1006 14:44:41.356221  682995 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1006 14:44:41.356247  682995 machine.go:96] duration metric: took 1.059160507s to provisionDockerMachine
	I1006 14:44:41.356259  682995 client.go:171] duration metric: took 6.924524382s to LocalClient.Create
	I1006 14:44:41.356282  682995 start.go:167] duration metric: took 6.924591304s to libmachine.API.Create "ha-481559"
	I1006 14:44:41.356295  682995 start.go:293] postStartSetup for "ha-481559" (driver="docker")
	I1006 14:44:41.356322  682995 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1006 14:44:41.356396  682995 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1006 14:44:41.356453  682995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:44:41.374424  682995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32883 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa Username:docker}
	I1006 14:44:41.479545  682995 ssh_runner.go:195] Run: cat /etc/os-release
	I1006 14:44:41.483318  682995 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1006 14:44:41.483345  682995 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1006 14:44:41.483356  682995 filesync.go:126] Scanning /home/jenkins/minikube-integration/21701-626179/.minikube/addons for local assets ...
	I1006 14:44:41.483402  682995 filesync.go:126] Scanning /home/jenkins/minikube-integration/21701-626179/.minikube/files for local assets ...
	I1006 14:44:41.483499  682995 filesync.go:149] local asset: /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem -> 6297192.pem in /etc/ssl/certs
	I1006 14:44:41.483510  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem -> /etc/ssl/certs/6297192.pem
	I1006 14:44:41.483603  682995 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1006 14:44:41.491409  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem --> /etc/ssl/certs/6297192.pem (1708 bytes)
	I1006 14:44:41.511609  682995 start.go:296] duration metric: took 155.29938ms for postStartSetup
	I1006 14:44:41.511914  682995 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-481559
	I1006 14:44:41.529867  682995 profile.go:143] Saving config to /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/config.json ...
	I1006 14:44:41.530158  682995 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1006 14:44:41.530223  682995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:44:41.547995  682995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32883 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa Username:docker}
	I1006 14:44:41.647810  682995 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1006 14:44:41.652637  682995 start.go:128] duration metric: took 7.223117194s to createHost
	I1006 14:44:41.652662  682995 start.go:83] releasing machines lock for "ha-481559", held for 7.223254897s
	I1006 14:44:41.652730  682995 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-481559
	I1006 14:44:41.670486  682995 ssh_runner.go:195] Run: cat /version.json
	I1006 14:44:41.670511  682995 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1006 14:44:41.670555  682995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:44:41.670581  682995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:44:41.689278  682995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32883 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa Username:docker}
	I1006 14:44:41.689801  682995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32883 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa Username:docker}
	I1006 14:44:41.845142  682995 ssh_runner.go:195] Run: systemctl --version
	I1006 14:44:41.852333  682995 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1006 14:44:41.886799  682995 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1006 14:44:41.891575  682995 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1006 14:44:41.891645  682995 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1006 14:44:41.918020  682995 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1006 14:44:41.918049  682995 start.go:495] detecting cgroup driver to use...
	I1006 14:44:41.918088  682995 detect.go:190] detected "systemd" cgroup driver on host os
	I1006 14:44:41.918148  682995 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1006 14:44:41.934827  682995 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1006 14:44:41.946573  682995 docker.go:218] disabling cri-docker service (if available) ...
	I1006 14:44:41.946626  682995 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1006 14:44:41.961811  682995 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1006 14:44:41.978333  682995 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1006 14:44:42.056893  682995 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1006 14:44:42.140645  682995 docker.go:234] disabling docker service ...
	I1006 14:44:42.140713  682995 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1006 14:44:42.159372  682995 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1006 14:44:42.171857  682995 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1006 14:44:42.255908  682995 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1006 14:44:42.340081  682995 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1006 14:44:42.352916  682995 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1006 14:44:42.367142  682995 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1006 14:44:42.367215  682995 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:44:42.377866  682995 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1006 14:44:42.377939  682995 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:44:42.387157  682995 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:44:42.395944  682995 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:44:42.404768  682995 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1006 14:44:42.412712  682995 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:44:42.420910  682995 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:44:42.434108  682995 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:44:42.442895  682995 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1006 14:44:42.450289  682995 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1006 14:44:42.457667  682995 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 14:44:42.535385  682995 ssh_runner.go:195] Run: sudo systemctl restart crio
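
Note: the sed edits above pin the pause image, switch cri-o to the systemd cgroup driver, move conmon into the pod cgroup, and open unprivileged low ports. A quick check that the drop-in ended up as intended (same file the edits targeted):

    docker exec ha-481559 grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
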
	I1006 14:44:42.643348  682995 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1006 14:44:42.643424  682995 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1006 14:44:42.647404  682995 start.go:563] Will wait 60s for crictl version
	I1006 14:44:42.647467  682995 ssh_runner.go:195] Run: which crictl
	I1006 14:44:42.651000  682995 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1006 14:44:42.675962  682995 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1006 14:44:42.676044  682995 ssh_runner.go:195] Run: crio --version
	I1006 14:44:42.705541  682995 ssh_runner.go:195] Run: crio --version
	I1006 14:44:42.736773  682995 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1006 14:44:42.738090  682995 cli_runner.go:164] Run: docker network inspect ha-481559 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1006 14:44:42.754892  682995 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1006 14:44:42.759274  682995 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1006 14:44:42.770415  682995 kubeadm.go:883] updating cluster {Name:ha-481559 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-481559 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1006 14:44:42.770534  682995 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1006 14:44:42.770581  682995 ssh_runner.go:195] Run: sudo crictl images --output json
	I1006 14:44:42.805187  682995 crio.go:514] all images are preloaded for cri-o runtime.
	I1006 14:44:42.805221  682995 crio.go:433] Images already preloaded, skipping extraction
	I1006 14:44:42.805274  682995 ssh_runner.go:195] Run: sudo crictl images --output json
	I1006 14:44:42.831096  682995 crio.go:514] all images are preloaded for cri-o runtime.
	I1006 14:44:42.831123  682995 cache_images.go:85] Images are preloaded, skipping loading
	I1006 14:44:42.831132  682995 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1006 14:44:42.831244  682995 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-481559 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-481559 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
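
Note: the empty ExecStart= line in the unit above is the usual systemd drop-in idiom: it clears the packaged command so the following ExecStart= can substitute minikube's kubelet invocation. Once the 10-kubeadm.conf drop-in has been copied into place (it is scp'd below), the merged unit can be viewed with:

    docker exec ha-481559 systemctl cat kubelet   # base unit plus 10-kubeadm.conf override
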
	I1006 14:44:42.831321  682995 ssh_runner.go:195] Run: crio config
	I1006 14:44:42.877768  682995 cni.go:84] Creating CNI manager for ""
	I1006 14:44:42.877790  682995 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1006 14:44:42.877819  682995 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1006 14:44:42.877840  682995 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-481559 NodeName:ha-481559 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1006 14:44:42.877966  682995 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-481559"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
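
Note: the multi-document config above (v1beta4 InitConfiguration/ClusterConfiguration plus kubelet and kube-proxy documents) can be sanity-checked before init; recent kubeadm releases ship a validator subcommand. A sketch, assuming it is run after the file has been copied to /var/tmp/minikube/kubeadm.yaml further down:

    docker exec ha-481559 /var/lib/minikube/binaries/v1.34.1/kubeadm \
      config validate --config /var/tmp/minikube/kubeadm.yaml
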
	
	I1006 14:44:42.877993  682995 kube-vip.go:115] generating kube-vip config ...
	I1006 14:44:42.878035  682995 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1006 14:44:42.890886  682995 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1006 14:44:42.890995  682995 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
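
Note: because the ip_vs modules are absent, kube-vip runs in plain ARP mode per the manifest above: the elected leader binds 192.168.49.254/32 (vip_cidr: 32) to eth0 and answers ARP for it. Two hedged checks, the second assuming modprobe is permitted on the host kernel:

    docker exec ha-481559 ip addr show dev eth0 | grep 192.168.49.254   # is the VIP held on this node?
    sudo modprobe -a ip_vs ip_vs_rr   # host-side; would let kube-vip enable IPVS load-balancing
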
	I1006 14:44:42.891046  682995 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1006 14:44:42.899063  682995 binaries.go:44] Found k8s binaries, skipping transfer
	I1006 14:44:42.899132  682995 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1006 14:44:42.906926  682995 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1006 14:44:42.919358  682995 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1006 14:44:42.934141  682995 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1006 14:44:42.945961  682995 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I1006 14:44:42.959489  682995 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1006 14:44:42.962953  682995 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1006 14:44:42.972760  682995 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 14:44:43.053996  682995 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1006 14:44:43.077665  682995 certs.go:69] Setting up /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559 for IP: 192.168.49.2
	I1006 14:44:43.077692  682995 certs.go:195] generating shared ca certs ...
	I1006 14:44:43.077714  682995 certs.go:227] acquiring lock for ca certs: {Name:mka0cc25cb6a953e937aa825fc55167759271aaf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:44:43.077856  682995 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21701-626179/.minikube/ca.key
	I1006 14:44:43.077899  682995 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21701-626179/.minikube/proxy-client-ca.key
	I1006 14:44:43.077909  682995 certs.go:257] generating profile certs ...
	I1006 14:44:43.077963  682995 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/client.key
	I1006 14:44:43.077983  682995 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/client.crt with IP's: []
	I1006 14:44:43.259387  682995 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/client.crt ...
	I1006 14:44:43.259418  682995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/client.crt: {Name:mk058803c7a7f0f2aa3fb547a3aafbba9518c3f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:44:43.259607  682995 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/client.key ...
	I1006 14:44:43.259619  682995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/client.key: {Name:mk0ae3492597f7c1edf0d7262770452fa244a40b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:44:43.265151  682995 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.key.6031b710
	I1006 14:44:43.265175  682995 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.crt.6031b710 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I1006 14:44:43.807062  682995 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.crt.6031b710 ...
	I1006 14:44:43.807095  682995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.crt.6031b710: {Name:mk30dd14f07a4b732bb60853cc2fd5f84f73e2f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:44:43.807283  682995 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.key.6031b710 ...
	I1006 14:44:43.807298  682995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.key.6031b710: {Name:mkf3f5fbdf7957143c03cb611320a2e02acb94c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:44:43.807374  682995 certs.go:382] copying /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.crt.6031b710 -> /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.crt
	I1006 14:44:43.807489  682995 certs.go:386] copying /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.key.6031b710 -> /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.key
	I1006 14:44:43.807558  682995 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.key
	I1006 14:44:43.807574  682995 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.crt with IP's: []
	I1006 14:44:43.994115  682995 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.crt ...
	I1006 14:44:43.994149  682995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.crt: {Name:mk715c6902e25626016d7eb8fdb7b52f0fdce895 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:44:43.994338  682995 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.key ...
	I1006 14:44:43.994350  682995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.key: {Name:mka438ddf42b96ca34511dda1ce60f08f1d48b59 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
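
Note: the apiserver certificate generated above is signed for the service IP, loopback, the node IP and the HA VIP; the SAN list can be confirmed straight off the written file:

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.crt \
      | grep -A1 'Subject Alternative Name'
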
	I1006 14:44:43.994429  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1006 14:44:43.994449  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1006 14:44:43.994460  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1006 14:44:43.994470  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1006 14:44:43.994480  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1006 14:44:43.994490  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1006 14:44:43.994510  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1006 14:44:43.994522  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1006 14:44:43.994570  682995 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/629719.pem (1338 bytes)
	W1006 14:44:43.994617  682995 certs.go:480] ignoring /home/jenkins/minikube-integration/21701-626179/.minikube/certs/629719_empty.pem, impossibly tiny 0 bytes
	I1006 14:44:43.994630  682995 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca-key.pem (1675 bytes)
	I1006 14:44:43.994653  682995 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem (1082 bytes)
	I1006 14:44:43.994674  682995 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/cert.pem (1123 bytes)
	I1006 14:44:43.994701  682995 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/key.pem (1679 bytes)
	I1006 14:44:43.994739  682995 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem (1708 bytes)
	I1006 14:44:43.994772  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem -> /usr/share/ca-certificates/6297192.pem
	I1006 14:44:43.994786  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1006 14:44:43.994798  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/629719.pem -> /usr/share/ca-certificates/629719.pem
	I1006 14:44:43.995423  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1006 14:44:44.014422  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1006 14:44:44.032422  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1006 14:44:44.050727  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1006 14:44:44.068490  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1006 14:44:44.085540  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1006 14:44:44.102941  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1006 14:44:44.121043  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1006 14:44:44.139583  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem --> /usr/share/ca-certificates/6297192.pem (1708 bytes)
	I1006 14:44:44.159654  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1006 14:44:44.176939  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/certs/629719.pem --> /usr/share/ca-certificates/629719.pem (1338 bytes)
	I1006 14:44:44.194332  682995 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1006 14:44:44.207641  682995 ssh_runner.go:195] Run: openssl version
	I1006 14:44:44.214349  682995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6297192.pem && ln -fs /usr/share/ca-certificates/6297192.pem /etc/ssl/certs/6297192.pem"
	I1006 14:44:44.223426  682995 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6297192.pem
	I1006 14:44:44.227339  682995 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  6 14:13 /usr/share/ca-certificates/6297192.pem
	I1006 14:44:44.227401  682995 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6297192.pem
	I1006 14:44:44.261578  682995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6297192.pem /etc/ssl/certs/3ec20f2e.0"
	I1006 14:44:44.270472  682995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1006 14:44:44.279083  682995 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1006 14:44:44.282749  682995 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  6 13:56 /usr/share/ca-certificates/minikubeCA.pem
	I1006 14:44:44.282813  682995 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1006 14:44:44.316484  682995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1006 14:44:44.325228  682995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/629719.pem && ln -fs /usr/share/ca-certificates/629719.pem /etc/ssl/certs/629719.pem"
	I1006 14:44:44.334098  682995 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/629719.pem
	I1006 14:44:44.337988  682995 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  6 14:13 /usr/share/ca-certificates/629719.pem
	I1006 14:44:44.338051  682995 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/629719.pem
	I1006 14:44:44.371914  682995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/629719.pem /etc/ssl/certs/51391683.0"
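
Note: the 3ec20f2e.0 / b5213941.0 / 51391683.0 names above are OpenSSL subject-hash links: programs trusting /etc/ssl/certs look certificates up by subject hash, so each PEM gets a <hash>.0 symlink. The pattern in general form:

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"
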
	I1006 14:44:44.380847  682995 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1006 14:44:44.384643  682995 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1006 14:44:44.384694  682995 kubeadm.go:400] StartCluster: {Name:ha-481559 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-481559 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 14:44:44.384758  682995 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1006 14:44:44.384823  682995 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1006 14:44:44.413083  682995 cri.go:89] found id: ""
	I1006 14:44:44.413145  682995 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1006 14:44:44.421446  682995 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1006 14:44:44.429380  682995 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1006 14:44:44.429431  682995 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1006 14:44:44.437643  682995 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1006 14:44:44.437667  682995 kubeadm.go:157] found existing configuration files:
	
	I1006 14:44:44.437726  682995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1006 14:44:44.445948  682995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1006 14:44:44.446021  682995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1006 14:44:44.453451  682995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1006 14:44:44.460986  682995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1006 14:44:44.461064  682995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1006 14:44:44.468259  682995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1006 14:44:44.475830  682995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1006 14:44:44.475882  682995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1006 14:44:44.483080  682995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1006 14:44:44.490569  682995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1006 14:44:44.490632  682995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1006 14:44:44.498056  682995 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1006 14:44:44.560210  682995 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1006 14:44:44.618315  682995 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1006 14:48:49.762009  682995 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1006 14:48:49.762136  682995 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1006 14:48:49.765019  682995 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1006 14:48:49.765065  682995 kubeadm.go:318] [preflight] Running pre-flight checks
	I1006 14:48:49.765142  682995 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1006 14:48:49.765192  682995 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1006 14:48:49.765263  682995 kubeadm.go:318] OS: Linux
	I1006 14:48:49.765329  682995 kubeadm.go:318] CGROUPS_CPU: enabled
	I1006 14:48:49.765384  682995 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1006 14:48:49.765424  682995 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1006 14:48:49.765465  682995 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1006 14:48:49.765507  682995 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1006 14:48:49.765557  682995 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1006 14:48:49.765644  682995 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1006 14:48:49.765713  682995 kubeadm.go:318] CGROUPS_IO: enabled
	I1006 14:48:49.765816  682995 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1006 14:48:49.765897  682995 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1006 14:48:49.765974  682995 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1006 14:48:49.766033  682995 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1006 14:48:49.768189  682995 out.go:252]   - Generating certificates and keys ...
	I1006 14:48:49.768304  682995 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1006 14:48:49.768391  682995 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1006 14:48:49.768495  682995 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1006 14:48:49.768546  682995 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1006 14:48:49.768600  682995 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1006 14:48:49.768641  682995 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1006 14:48:49.768684  682995 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1006 14:48:49.768778  682995 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [ha-481559 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1006 14:48:49.768847  682995 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1006 14:48:49.768982  682995 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [ha-481559 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1006 14:48:49.769042  682995 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1006 14:48:49.769108  682995 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1006 14:48:49.769166  682995 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1006 14:48:49.769263  682995 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1006 14:48:49.769339  682995 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1006 14:48:49.769416  682995 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1006 14:48:49.769489  682995 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1006 14:48:49.769549  682995 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1006 14:48:49.769601  682995 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1006 14:48:49.769671  682995 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1006 14:48:49.769753  682995 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1006 14:48:49.771489  682995 out.go:252]   - Booting up control plane ...
	I1006 14:48:49.771577  682995 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1006 14:48:49.771664  682995 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1006 14:48:49.771742  682995 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1006 14:48:49.771858  682995 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1006 14:48:49.771974  682995 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1006 14:48:49.772108  682995 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1006 14:48:49.772220  682995 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1006 14:48:49.772288  682995 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1006 14:48:49.772413  682995 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1006 14:48:49.772556  682995 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1006 14:48:49.772647  682995 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.501252368s
	I1006 14:48:49.772772  682995 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1006 14:48:49.772891  682995 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1006 14:48:49.772971  682995 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1006 14:48:49.773033  682995 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1006 14:48:49.773108  682995 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.001319326s
	I1006 14:48:49.773189  682995 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001358761s
	I1006 14:48:49.773304  682995 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.001281021s
	I1006 14:48:49.773319  682995 kubeadm.go:318] 
	I1006 14:48:49.773407  682995 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1006 14:48:49.773472  682995 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1006 14:48:49.773545  682995 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1006 14:48:49.773657  682995 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1006 14:48:49.773771  682995 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1006 14:48:49.773850  682995 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1006 14:48:49.773891  682995 kubeadm.go:318] 
	W1006 14:48:49.774048  682995 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ha-481559 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ha-481559 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.501252368s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.001319326s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001358761s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001281021s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	I1006 14:48:49.774147  682995 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1006 14:48:52.524900  682995 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.75072398s)
	I1006 14:48:52.524985  682995 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1006 14:48:52.538104  682995 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1006 14:48:52.538173  682995 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1006 14:48:52.546610  682995 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1006 14:48:52.546639  682995 kubeadm.go:157] found existing configuration files:
	
	I1006 14:48:52.546692  682995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1006 14:48:52.555271  682995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1006 14:48:52.555334  682995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1006 14:48:52.564502  682995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1006 14:48:52.572861  682995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1006 14:48:52.572925  682995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1006 14:48:52.580681  682995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1006 14:48:52.588574  682995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1006 14:48:52.588636  682995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1006 14:48:52.596314  682995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1006 14:48:52.604007  682995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1006 14:48:52.604073  682995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1006 14:48:52.611967  682995 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1006 14:48:52.650794  682995 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1006 14:48:52.650844  682995 kubeadm.go:318] [preflight] Running pre-flight checks
	I1006 14:48:52.671446  682995 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1006 14:48:52.671559  682995 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1006 14:48:52.671628  682995 kubeadm.go:318] OS: Linux
	I1006 14:48:52.671718  682995 kubeadm.go:318] CGROUPS_CPU: enabled
	I1006 14:48:52.671766  682995 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1006 14:48:52.671811  682995 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1006 14:48:52.671850  682995 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1006 14:48:52.671890  682995 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1006 14:48:52.671928  682995 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1006 14:48:52.671972  682995 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1006 14:48:52.672010  682995 kubeadm.go:318] CGROUPS_IO: enabled
	I1006 14:48:52.732758  682995 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1006 14:48:52.732876  682995 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1006 14:48:52.732979  682995 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1006 14:48:52.739914  682995 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1006 14:48:52.743428  682995 out.go:252]   - Generating certificates and keys ...
	I1006 14:48:52.743535  682995 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1006 14:48:52.743654  682995 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1006 14:48:52.743727  682995 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1006 14:48:52.743777  682995 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1006 14:48:52.743861  682995 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1006 14:48:52.743911  682995 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1006 14:48:52.743985  682995 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1006 14:48:52.744055  682995 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1006 14:48:52.744143  682995 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1006 14:48:52.744228  682995 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1006 14:48:52.744266  682995 kubeadm.go:318] [certs] Using the existing "sa" key
	I1006 14:48:52.744323  682995 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1006 14:48:53.107297  682995 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1006 14:48:53.300701  682995 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1006 14:48:53.503166  682995 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1006 14:48:53.664024  682995 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1006 14:48:53.725865  682995 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1006 14:48:53.726293  682995 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1006 14:48:53.728797  682995 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1006 14:48:53.730586  682995 out.go:252]   - Booting up control plane ...
	I1006 14:48:53.730720  682995 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1006 14:48:53.730830  682995 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1006 14:48:53.730903  682995 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1006 14:48:53.744534  682995 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1006 14:48:53.744672  682995 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1006 14:48:53.752267  682995 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1006 14:48:53.752422  682995 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1006 14:48:53.752505  682995 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1006 14:48:53.852049  682995 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1006 14:48:53.852226  682995 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1006 14:48:54.353729  682995 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 501.825241ms
	I1006 14:48:54.356542  682995 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1006 14:48:54.356619  682995 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1006 14:48:54.356695  682995 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1006 14:48:54.356819  682995 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1006 14:52:54.358331  682995 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.001082251s
	I1006 14:52:54.358653  682995 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001136686s
	I1006 14:52:54.358853  682995 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.001070627s
	I1006 14:52:54.358881  682995 kubeadm.go:318] 
	I1006 14:52:54.359059  682995 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1006 14:52:54.359298  682995 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1006 14:52:54.359539  682995 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1006 14:52:54.359760  682995 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1006 14:52:54.359952  682995 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1006 14:52:54.360116  682995 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1006 14:52:54.360148  682995 kubeadm.go:318] 
	I1006 14:52:54.363033  682995 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1006 14:52:54.363163  682995 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1006 14:52:54.363696  682995 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1006 14:52:54.363761  682995 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1006 14:52:54.363858  682995 kubeadm.go:402] duration metric: took 8m9.979166519s to StartCluster
	I1006 14:52:54.363946  682995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:52:54.364031  682995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:52:54.392579  682995 cri.go:89] found id: ""
	I1006 14:52:54.392622  682995 logs.go:282] 0 containers: []
	W1006 14:52:54.392631  682995 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:52:54.392638  682995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:52:54.392693  682995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:52:54.420188  682995 cri.go:89] found id: ""
	I1006 14:52:54.420226  682995 logs.go:282] 0 containers: []
	W1006 14:52:54.420237  682995 logs.go:284] No container was found matching "etcd"
	I1006 14:52:54.420245  682995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:52:54.420299  682995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:52:54.445694  682995 cri.go:89] found id: ""
	I1006 14:52:54.445723  682995 logs.go:282] 0 containers: []
	W1006 14:52:54.445733  682995 logs.go:284] No container was found matching "coredns"
	I1006 14:52:54.445740  682995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:52:54.445791  682995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:52:54.471923  682995 cri.go:89] found id: ""
	I1006 14:52:54.471954  682995 logs.go:282] 0 containers: []
	W1006 14:52:54.471962  682995 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:52:54.471971  682995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:52:54.472030  682995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:52:54.498805  682995 cri.go:89] found id: ""
	I1006 14:52:54.498836  682995 logs.go:282] 0 containers: []
	W1006 14:52:54.498848  682995 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:52:54.498857  682995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:52:54.498922  682995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:52:54.524613  682995 cri.go:89] found id: ""
	I1006 14:52:54.524638  682995 logs.go:282] 0 containers: []
	W1006 14:52:54.524646  682995 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:52:54.524652  682995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:52:54.524708  682995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:52:54.551140  682995 cri.go:89] found id: ""
	I1006 14:52:54.551170  682995 logs.go:282] 0 containers: []
	W1006 14:52:54.551181  682995 logs.go:284] No container was found matching "kindnet"
	I1006 14:52:54.551194  682995 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:52:54.551220  682995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:52:54.615573  682995 logs.go:123] Gathering logs for container status ...
	I1006 14:52:54.615607  682995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:52:54.645703  682995 logs.go:123] Gathering logs for kubelet ...
	I1006 14:52:54.645732  682995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:52:54.709506  682995 logs.go:123] Gathering logs for dmesg ...
	I1006 14:52:54.709543  682995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:52:54.722963  682995 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:52:54.722997  682995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:52:54.783016  682995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:52:54.774940    2611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 14:52:54.776283    2611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 14:52:54.777585    2611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 14:52:54.778053    2611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 14:52:54.779590    2611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1006 14:52:54.783054  682995 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.825241ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.001082251s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001136686s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001070627s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1006 14:52:54.783107  682995 out.go:285] * 
	W1006 14:52:54.783182  682995 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	
	W1006 14:52:54.783200  682995 out.go:285] * 
	W1006 14:52:54.785658  682995 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1006 14:52:54.789273  682995 out.go:203] 
	W1006 14:52:54.790573  682995 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	
	W1006 14:52:54.790604  682995 out.go:285] * 
	I1006 14:52:54.791821  682995 out.go:203] 
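
Both kubeadm init attempts above fail the same way: the kubelet reports healthy within seconds, but kube-apiserver, kube-controller-manager and kube-scheduler never answer their health endpoints, and the follow-up `sudo crictl ps -a --quiet --name=...` queries return no control-plane containers at all. That points at container creation failing, not at crashing binaries, so the triage kubeadm suggests reduces to reading the runtime's own log. A minimal sketch, assuming the profile name matches the node name ha-481559 seen in these logs:

    $ minikube ssh -p ha-481559
    $ sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause   # empty on this run
    $ sudo journalctl -u crio -n 100 --no-pager | grep -i 'creation error'                              # surfaces the failure shown below

The CRI-O section that follows contains the actual creation error.
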
	
	
	==> CRI-O <==
	Oct 06 14:54:28 ha-481559 crio[777]: time="2025-10-06T14:54:28.246426068Z" level=info msg="createCtr: removing container c1376676dafaf7b4d10a72a589a3ae2d56ecf790744e031ae536ebf8175e4485" id=ecbc1c2a-3ed9-4452-81c8-a6b6b312f34f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:54:28 ha-481559 crio[777]: time="2025-10-06T14:54:28.246465998Z" level=info msg="createCtr: deleting container c1376676dafaf7b4d10a72a589a3ae2d56ecf790744e031ae536ebf8175e4485 from storage" id=ecbc1c2a-3ed9-4452-81c8-a6b6b312f34f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:54:28 ha-481559 crio[777]: time="2025-10-06T14:54:28.249858456Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-ha-481559_kube-system_520c6060936b1c2aac479c99ed6c0355_0" id=ecbc1c2a-3ed9-4452-81c8-a6b6b312f34f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:54:30 ha-481559 crio[777]: time="2025-10-06T14:54:30.222474023Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=e2afb2cc-8b95-45ef-839d-d0dd5c34800d name=/runtime.v1.ImageService/ImageStatus
	Oct 06 14:54:30 ha-481559 crio[777]: time="2025-10-06T14:54:30.22368953Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=401de123-749a-4ebc-8ab1-078ad9c73c34 name=/runtime.v1.ImageService/ImageStatus
	Oct 06 14:54:30 ha-481559 crio[777]: time="2025-10-06T14:54:30.224710592Z" level=info msg="Creating container: kube-system/kube-scheduler-ha-481559/kube-scheduler" id=2af7715c-4231-40ed-a841-9fbd70a525e3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:54:30 ha-481559 crio[777]: time="2025-10-06T14:54:30.225088554Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 14:54:30 ha-481559 crio[777]: time="2025-10-06T14:54:30.22924992Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 14:54:30 ha-481559 crio[777]: time="2025-10-06T14:54:30.229878709Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 14:54:30 ha-481559 crio[777]: time="2025-10-06T14:54:30.249923087Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=2af7715c-4231-40ed-a841-9fbd70a525e3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:54:30 ha-481559 crio[777]: time="2025-10-06T14:54:30.251834213Z" level=info msg="createCtr: deleting container ID 4ccf5071d4a15329b25d201d70f0042454b12c8c9f251bd3ce8f5e7daa11b368 from idIndex" id=2af7715c-4231-40ed-a841-9fbd70a525e3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:54:30 ha-481559 crio[777]: time="2025-10-06T14:54:30.251887095Z" level=info msg="createCtr: removing container 4ccf5071d4a15329b25d201d70f0042454b12c8c9f251bd3ce8f5e7daa11b368" id=2af7715c-4231-40ed-a841-9fbd70a525e3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:54:30 ha-481559 crio[777]: time="2025-10-06T14:54:30.251932195Z" level=info msg="createCtr: deleting container 4ccf5071d4a15329b25d201d70f0042454b12c8c9f251bd3ce8f5e7daa11b368 from storage" id=2af7715c-4231-40ed-a841-9fbd70a525e3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:54:30 ha-481559 crio[777]: time="2025-10-06T14:54:30.2573433Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-ha-481559_kube-system_cc93cb8d89afaa943672c70952b45174_0" id=2af7715c-4231-40ed-a841-9fbd70a525e3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:54:32 ha-481559 crio[777]: time="2025-10-06T14:54:32.222451545Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=d2a61b85-604a-4a78-b4a0-a6ac7419591f name=/runtime.v1.ImageService/ImageStatus
	Oct 06 14:54:32 ha-481559 crio[777]: time="2025-10-06T14:54:32.223732465Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=756cc7eb-c750-461e-be90-ed96d3fbe167 name=/runtime.v1.ImageService/ImageStatus
	Oct 06 14:54:32 ha-481559 crio[777]: time="2025-10-06T14:54:32.22488018Z" level=info msg="Creating container: kube-system/kube-controller-manager-ha-481559/kube-controller-manager" id=15a4b8e4-4639-4fe3-b26e-d24edb5aaac3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:54:32 ha-481559 crio[777]: time="2025-10-06T14:54:32.225141708Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 14:54:32 ha-481559 crio[777]: time="2025-10-06T14:54:32.228812582Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 14:54:32 ha-481559 crio[777]: time="2025-10-06T14:54:32.229373513Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 14:54:32 ha-481559 crio[777]: time="2025-10-06T14:54:32.246307916Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=15a4b8e4-4639-4fe3-b26e-d24edb5aaac3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:54:32 ha-481559 crio[777]: time="2025-10-06T14:54:32.247725035Z" level=info msg="createCtr: deleting container ID 5cef11bd3bd8e3ab02e1ecc608a3fdc92d76230ae854ce7d96ffba97b455d556 from idIndex" id=15a4b8e4-4639-4fe3-b26e-d24edb5aaac3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:54:32 ha-481559 crio[777]: time="2025-10-06T14:54:32.247768884Z" level=info msg="createCtr: removing container 5cef11bd3bd8e3ab02e1ecc608a3fdc92d76230ae854ce7d96ffba97b455d556" id=15a4b8e4-4639-4fe3-b26e-d24edb5aaac3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:54:32 ha-481559 crio[777]: time="2025-10-06T14:54:32.247812016Z" level=info msg="createCtr: deleting container 5cef11bd3bd8e3ab02e1ecc608a3fdc92d76230ae854ce7d96ffba97b455d556 from storage" id=15a4b8e4-4639-4fe3-b26e-d24edb5aaac3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:54:32 ha-481559 crio[777]: time="2025-10-06T14:54:32.249966611Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-ha-481559_kube-system_5f3181798721fe8691d871f051785efc_0" id=15a4b8e4-4639-4fe3-b26e-d24edb5aaac3 name=/runtime.v1.RuntimeService/CreateContainer
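
Every CreateContainer call in this window fails with the same runtime error, "cannot open sd-bus: No such file or directory": CRI-O is driving cgroups through systemd and cannot reach the systemd D-Bus socket inside the node container. A quick confirmation sketch (the paths below are CRI-O and D-Bus defaults, not taken from this run):

    $ sudo grep -rn cgroup_manager /etc/crio/crio.conf /etc/crio/crio.conf.d/ 2>/dev/null   # expect "systemd" here
    $ ls -l /run/dbus/system_bus_socket                                                     # if missing, sd-bus cannot be opened
    $ systemctl is-system-running                                                           # is systemd actually up and serving the bus?
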
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:54:36.155745    3913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 14:54:36.156262    3913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 14:54:36.157842    3913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 14:54:36.158237    3913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 14:54:36.159635    3913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	
	
	==> kernel <==
	 14:54:36 up  5:36,  0 user,  load average: 0.31, 0.11, 0.16
	Linux ha-481559 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 06 14:54:28 ha-481559 kubelet[1985]: E1006 14:54:28.250344    1985 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-ha-481559" podUID="520c6060936b1c2aac479c99ed6c0355"
	Oct 06 14:54:28 ha-481559 kubelet[1985]: E1006 14:54:28.861462    1985 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-481559?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 06 14:54:29 ha-481559 kubelet[1985]: E1006 14:54:29.038379    1985 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8443: connect: connection refused" event="&Event{ObjectMeta:{ha-481559.186bee56630f6256  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-481559,UID:ha-481559,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ha-481559 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ha-481559,},FirstTimestamp:2025-10-06 14:48:54.214861398 +0000 UTC m=+0.361990569,LastTimestamp:2025-10-06 14:48:54.214861398 +0000 UTC m=+0.361990569,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-481559,}"
	Oct 06 14:54:29 ha-481559 kubelet[1985]: I1006 14:54:29.038950    1985 kubelet_node_status.go:75] "Attempting to register node" node="ha-481559"
	Oct 06 14:54:29 ha-481559 kubelet[1985]: E1006 14:54:29.039334    1985 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-481559"
	Oct 06 14:54:30 ha-481559 kubelet[1985]: E1006 14:54:30.221903    1985 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-481559\" not found" node="ha-481559"
	Oct 06 14:54:30 ha-481559 kubelet[1985]: E1006 14:54:30.257771    1985 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 06 14:54:30 ha-481559 kubelet[1985]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 06 14:54:30 ha-481559 kubelet[1985]:  > podSandboxID="28815a6c32deaa458111079bbac61f47b8e22f338f2282fab7d62077c8b07f1e"
	Oct 06 14:54:30 ha-481559 kubelet[1985]: E1006 14:54:30.257901    1985 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 06 14:54:30 ha-481559 kubelet[1985]:         container kube-scheduler start failed in pod kube-scheduler-ha-481559_kube-system(cc93cb8d89afaa943672c70952b45174): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 06 14:54:30 ha-481559 kubelet[1985]:  > logger="UnhandledError"
	Oct 06 14:54:30 ha-481559 kubelet[1985]: E1006 14:54:30.257947    1985 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-ha-481559" podUID="cc93cb8d89afaa943672c70952b45174"
	Oct 06 14:54:32 ha-481559 kubelet[1985]: E1006 14:54:32.221850    1985 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-481559\" not found" node="ha-481559"
	Oct 06 14:54:32 ha-481559 kubelet[1985]: E1006 14:54:32.250411    1985 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 06 14:54:32 ha-481559 kubelet[1985]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 06 14:54:32 ha-481559 kubelet[1985]:  > podSandboxID="ed93c32f27ea2f50c71693ae2d5854b0e5ace377e978db1e844e55a1b66c855a"
	Oct 06 14:54:32 ha-481559 kubelet[1985]: E1006 14:54:32.250537    1985 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 06 14:54:32 ha-481559 kubelet[1985]:         container kube-controller-manager start failed in pod kube-controller-manager-ha-481559_kube-system(5f3181798721fe8691d871f051785efc): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 06 14:54:32 ha-481559 kubelet[1985]:  > logger="UnhandledError"
	Oct 06 14:54:32 ha-481559 kubelet[1985]: E1006 14:54:32.250577    1985 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-ha-481559" podUID="5f3181798721fe8691d871f051785efc"
	Oct 06 14:54:34 ha-481559 kubelet[1985]: E1006 14:54:34.245614    1985 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-481559\" not found"
	Oct 06 14:54:35 ha-481559 kubelet[1985]: E1006 14:54:35.862531    1985 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-481559?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 06 14:54:36 ha-481559 kubelet[1985]: I1006 14:54:36.040612    1985 kubelet_node_status.go:75] "Attempting to register node" node="ha-481559"
	Oct 06 14:54:36 ha-481559 kubelet[1985]: E1006 14:54:36.041041    1985 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-481559"
	

-- /stdout --
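The repeated CreateContainerError above ("cannot open sd-bus: No such file or directory") means the OCI runtime inside the kic container could not reach the systemd D-Bus socket, which is why etcd, kube-scheduler and kube-controller-manager never came up and every probe of :8443 was refused. A minimal follow-up sketch, assuming the ha-481559 container from this run is still up (stock docker/systemctl/journalctl invocations, nothing minikube-specific):

	# Is D-Bus alive inside the node container?
	docker exec ha-481559 systemctl is-active dbus
	docker exec ha-481559 ls -l /run/dbus/system_bus_socket
	# Runtime-side view of the same create failure
	docker exec ha-481559 journalctl -u crio --no-pager | grep -i sd-bus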
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-481559 -n ha-481559
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-481559 -n ha-481559: exit status 6 (298.886768ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1006 14:54:36.536701  692499 status.go:458] kubeconfig endpoint: get endpoint: "ha-481559" does not appear in /home/jenkins/minikube-integration/21701-626179/kubeconfig

** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "ha-481559" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/CopyFile (1.56s)

TestMultiControlPlane/serial/StopSecondaryNode (1.62s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-481559 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-481559 node stop m02 --alsologtostderr -v 5: exit status 85 (89.519159ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1006 14:54:36.598904  692613 out.go:360] Setting OutFile to fd 1 ...
	I1006 14:54:36.599011  692613 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 14:54:36.599024  692613 out.go:374] Setting ErrFile to fd 2...
	I1006 14:54:36.599029  692613 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 14:54:36.599265  692613 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-626179/.minikube/bin
	I1006 14:54:36.599606  692613 mustload.go:65] Loading cluster: ha-481559
	I1006 14:54:36.599989  692613 config.go:182] Loaded profile config "ha-481559": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 14:54:36.601984  692613 out.go:203] 
	W1006 14:54:36.603133  692613 out.go:285] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W1006 14:54:36.603160  692613 out.go:285] * 
	* 
	W1006 14:54:36.634342  692613 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_8ce24bb09be8aab84076d51946735f62cbf80299_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_8ce24bb09be8aab84076d51946735f62cbf80299_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1006 14:54:36.635985  692613 out.go:203] 

** /stderr **
ha_test.go:367: secondary control-plane node stop returned an error. args "out/minikube-linux-amd64 -p ha-481559 node stop m02 --alsologtostderr -v 5": exit status 85
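Exit status 85 maps to GUEST_NODE_RETRIEVE: the profile has no m02 node to stop, consistent with the `node add` invocation in the audit table further down never recording an end time. A quick cross-check, sketched on the assumption the workspace is unchanged (both are stock minikube/docker commands):

	out/minikube-linux-amd64 -p ha-481559 node list
	docker ps -a --filter label=name.minikube.sigs.k8s.io=ha-481559 --format '{{.Names}}: {{.Status}}'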
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-481559 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-481559 status --alsologtostderr -v 5: exit status 6 (287.120574ms)

-- stdout --
	ha-481559
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	I1006 14:54:36.686527  692624 out.go:360] Setting OutFile to fd 1 ...
	I1006 14:54:36.686752  692624 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 14:54:36.686760  692624 out.go:374] Setting ErrFile to fd 2...
	I1006 14:54:36.686764  692624 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 14:54:36.686951  692624 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-626179/.minikube/bin
	I1006 14:54:36.687110  692624 out.go:368] Setting JSON to false
	I1006 14:54:36.687139  692624 mustload.go:65] Loading cluster: ha-481559
	I1006 14:54:36.687218  692624 notify.go:220] Checking for updates...
	I1006 14:54:36.687474  692624 config.go:182] Loaded profile config "ha-481559": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 14:54:36.687488  692624 status.go:174] checking status of ha-481559 ...
	I1006 14:54:36.687913  692624 cli_runner.go:164] Run: docker container inspect ha-481559 --format={{.State.Status}}
	I1006 14:54:36.706709  692624 status.go:371] ha-481559 host status = "Running" (err=<nil>)
	I1006 14:54:36.706734  692624 host.go:66] Checking if "ha-481559" exists ...
	I1006 14:54:36.706991  692624 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-481559
	I1006 14:54:36.723456  692624 host.go:66] Checking if "ha-481559" exists ...
	I1006 14:54:36.723715  692624 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1006 14:54:36.723768  692624 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:54:36.740069  692624 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32883 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa Username:docker}
	I1006 14:54:36.838085  692624 ssh_runner.go:195] Run: systemctl --version
	I1006 14:54:36.844359  692624 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1006 14:54:36.856505  692624 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 14:54:36.912771  692624 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-06 14:54:36.903264085 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	E1006 14:54:36.913312  692624 status.go:458] kubeconfig endpoint: get endpoint: "ha-481559" does not appear in /home/jenkins/minikube-integration/21701-626179/kubeconfig
	I1006 14:54:36.913346  692624 api_server.go:166] Checking apiserver status ...
	I1006 14:54:36.913383  692624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1006 14:54:36.923656  692624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1006 14:54:36.923679  692624 status.go:463] ha-481559 apiserver status = Running (err=<nil>)
	I1006 14:54:36.923695  692624 status.go:176] ha-481559 status: &{Name:ha-481559 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Misconfigured Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:374: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-481559 status --alsologtostderr -v 5" : exit status 6
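Every status probe in this section trips over the same root cause: the profile is absent from /home/jenkins/minikube-integration/21701-626179/kubeconfig, so status reports Kubeconfig:Misconfigured even while the container host is Running. The fix is the one the warning itself prints; as a sketch, assuming the profile directory is intact:

	out/minikube-linux-amd64 -p ha-481559 update-context
	kubectl --kubeconfig /home/jenkins/minikube-integration/21701-626179/kubeconfig config get-contexts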
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-481559
helpers_test.go:243: (dbg) docker inspect ha-481559:

-- stdout --
	[
	    {
	        "Id": "8b017d29b6b188c11217460aa328e959f3cceef4aaac68c0efbe9c3f356f27b0",
	        "Created": "2025-10-06T14:44:39.623616791Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 683567,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-06T14:44:39.660699919Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/8b017d29b6b188c11217460aa328e959f3cceef4aaac68c0efbe9c3f356f27b0/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8b017d29b6b188c11217460aa328e959f3cceef4aaac68c0efbe9c3f356f27b0/hostname",
	        "HostsPath": "/var/lib/docker/containers/8b017d29b6b188c11217460aa328e959f3cceef4aaac68c0efbe9c3f356f27b0/hosts",
	        "LogPath": "/var/lib/docker/containers/8b017d29b6b188c11217460aa328e959f3cceef4aaac68c0efbe9c3f356f27b0/8b017d29b6b188c11217460aa328e959f3cceef4aaac68c0efbe9c3f356f27b0-json.log",
	        "Name": "/ha-481559",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-481559:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-481559",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "8b017d29b6b188c11217460aa328e959f3cceef4aaac68c0efbe9c3f356f27b0",
	                "LowerDir": "/var/lib/docker/overlay2/764c4d13032cea1c981a1513641378a4e309e5bc01b5356c6fecba1c3e0e2311-init/diff:/var/lib/docker/overlay2/498c39ad2e273bbda04a4b230222b9767ea2da097b1fe98436168d26143cd080/diff",
	                "MergedDir": "/var/lib/docker/overlay2/764c4d13032cea1c981a1513641378a4e309e5bc01b5356c6fecba1c3e0e2311/merged",
	                "UpperDir": "/var/lib/docker/overlay2/764c4d13032cea1c981a1513641378a4e309e5bc01b5356c6fecba1c3e0e2311/diff",
	                "WorkDir": "/var/lib/docker/overlay2/764c4d13032cea1c981a1513641378a4e309e5bc01b5356c6fecba1c3e0e2311/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-481559",
	                "Source": "/var/lib/docker/volumes/ha-481559/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-481559",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-481559",
	                "name.minikube.sigs.k8s.io": "ha-481559",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7effae92997970d320561b0b86c210815b18a55d65bd555e1bff50158ed38adc",
	            "SandboxKey": "/var/run/docker/netns/7effae929979",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32883"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32884"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32887"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32885"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32886"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-481559": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "42:f3:45:3f:5b:fc",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "be549c6a1ae4457d4629d9a7f86cde88021333ee0af8bb7a740b008115c43dde",
	                    "EndpointID": "b8540561692606ad815fcdb4502c1e36a16141413d3697f4cf48668502930e4c",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-481559",
	                        "8b017d29b6b1"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
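The inspect dump is long, but the post-mortem only needs a few fields, and they can be pulled with the same Go templates the harness itself runs through cli_runner above; for example:

	docker inspect -f '{{.State.Status}}' ha-481559
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' ha-481559
	docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' ha-481559

Against the JSON above these would print running, 32886 and 192.168.49.2 respectively.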
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-481559 -n ha-481559
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-481559 -n ha-481559: exit status 6 (288.586676ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1006 14:54:37.220462  692765 status.go:458] kubeconfig endpoint: get endpoint: "ha-481559" does not appear in /home/jenkins/minikube-integration/21701-626179/kubeconfig

** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
helpers_test.go:252: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-481559 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                      ARGS                                                       │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ functional-135520 ssh pgrep buildkitd                                                                           │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │                     │
	│ image   │ functional-135520 image build -t localhost/my-image:functional-135520 testdata/build --alsologtostderr          │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │ 06 Oct 25 14:40 UTC │
	│ image   │ functional-135520 image ls                                                                                      │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │ 06 Oct 25 14:40 UTC │
	│ delete  │ -p functional-135520                                                                                            │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:44 UTC │ 06 Oct 25 14:44 UTC │
	│ start   │ ha-481559 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:44 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml                                                │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:52 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- rollout status deployment/busybox                                                          │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:52 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:52 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:52 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:52 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:53 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:53 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:53 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:53 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:53 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:53 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:54 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:54 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:54 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- exec  -- nslookup kubernetes.io                                                            │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:54 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- exec  -- nslookup kubernetes.default                                                       │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:54 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local                                     │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:54 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:54 UTC │                     │
	│ node    │ ha-481559 node add --alsologtostderr -v 5                                                                       │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:54 UTC │                     │
	│ node    │ ha-481559 node stop m02 --alsologtostderr -v 5                                                                  │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:54 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/06 14:44:34
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1006 14:44:34.230587  682995 out.go:360] Setting OutFile to fd 1 ...
	I1006 14:44:34.230719  682995 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 14:44:34.230728  682995 out.go:374] Setting ErrFile to fd 2...
	I1006 14:44:34.230733  682995 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 14:44:34.230969  682995 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-626179/.minikube/bin
	I1006 14:44:34.231523  682995 out.go:368] Setting JSON to false
	I1006 14:44:34.232538  682995 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":19610,"bootTime":1759742264,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1006 14:44:34.232651  682995 start.go:140] virtualization: kvm guest
	I1006 14:44:34.235278  682995 out.go:179] * [ha-481559] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1006 14:44:34.236668  682995 out.go:179]   - MINIKUBE_LOCATION=21701
	I1006 14:44:34.236708  682995 notify.go:220] Checking for updates...
	I1006 14:44:34.239256  682995 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1006 14:44:34.240475  682995 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21701-626179/kubeconfig
	I1006 14:44:34.242249  682995 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21701-626179/.minikube
	I1006 14:44:34.243577  682995 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1006 14:44:34.244737  682995 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1006 14:44:34.246267  682995 driver.go:421] Setting default libvirt URI to qemu:///system
	I1006 14:44:34.271626  682995 docker.go:123] docker version: linux-28.5.0:Docker Engine - Community
	I1006 14:44:34.271783  682995 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 14:44:34.334697  682995 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-06 14:44:34.323928193 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1006 14:44:34.334819  682995 docker.go:318] overlay module found
	I1006 14:44:34.336770  682995 out.go:179] * Using the docker driver based on user configuration
	I1006 14:44:34.338109  682995 start.go:304] selected driver: docker
	I1006 14:44:34.338130  682995 start.go:924] validating driver "docker" against <nil>
	I1006 14:44:34.338144  682995 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1006 14:44:34.338750  682995 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 14:44:34.398314  682995 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-06 14:44:34.387376197 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1006 14:44:34.398587  682995 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1006 14:44:34.399080  682995 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1006 14:44:34.401095  682995 out.go:179] * Using Docker driver with root privileges
	I1006 14:44:34.402283  682995 cni.go:84] Creating CNI manager for ""
	I1006 14:44:34.402367  682995 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1006 14:44:34.402383  682995 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1006 14:44:34.402476  682995 start.go:348] cluster config:
	{Name:ha-481559 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-481559 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 14:44:34.403829  682995 out.go:179] * Starting "ha-481559" primary control-plane node in "ha-481559" cluster
	I1006 14:44:34.404899  682995 cache.go:123] Beginning downloading kic base image for docker with crio
	I1006 14:44:34.406166  682995 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1006 14:44:34.407227  682995 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1006 14:44:34.407272  682995 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21701-626179/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1006 14:44:34.407284  682995 cache.go:58] Caching tarball of preloaded images
	I1006 14:44:34.407376  682995 preload.go:233] Found /home/jenkins/minikube-integration/21701-626179/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1006 14:44:34.407382  682995 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1006 14:44:34.407387  682995 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1006 14:44:34.407757  682995 profile.go:143] Saving config to /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/config.json ...
	I1006 14:44:34.407793  682995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/config.json: {Name:mkefd90ec0b9eae63c82d60bab053cdf7b5d9b74 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:44:34.429193  682995 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1006 14:44:34.429233  682995 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1006 14:44:34.429254  682995 cache.go:232] Successfully downloaded all kic artifacts
	I1006 14:44:34.429296  682995 start.go:360] acquireMachinesLock for ha-481559: {Name:mk240cd185ab39e9e4d3fa7c476aea5736cb5b11 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1006 14:44:34.429397  682995 start.go:364] duration metric: took 84.055µs to acquireMachinesLock for "ha-481559"
	I1006 14:44:34.429421  682995 start.go:93] Provisioning new machine with config: &{Name:ha-481559 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-481559 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1006 14:44:34.429503  682995 start.go:125] createHost starting for "" (driver="docker")
	I1006 14:44:34.431456  682995 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1006 14:44:34.431692  682995 start.go:159] libmachine.API.Create for "ha-481559" (driver="docker")
	I1006 14:44:34.431725  682995 client.go:168] LocalClient.Create starting
	I1006 14:44:34.431791  682995 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem
	I1006 14:44:34.431825  682995 main.go:141] libmachine: Decoding PEM data...
	I1006 14:44:34.431843  682995 main.go:141] libmachine: Parsing certificate...
	I1006 14:44:34.431939  682995 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21701-626179/.minikube/certs/cert.pem
	I1006 14:44:34.431977  682995 main.go:141] libmachine: Decoding PEM data...
	I1006 14:44:34.431994  682995 main.go:141] libmachine: Parsing certificate...
	I1006 14:44:34.432416  682995 cli_runner.go:164] Run: docker network inspect ha-481559 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1006 14:44:34.449965  682995 cli_runner.go:211] docker network inspect ha-481559 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1006 14:44:34.450053  682995 network_create.go:284] running [docker network inspect ha-481559] to gather additional debugging logs...
	I1006 14:44:34.450071  682995 cli_runner.go:164] Run: docker network inspect ha-481559
	W1006 14:44:34.468682  682995 cli_runner.go:211] docker network inspect ha-481559 returned with exit code 1
	I1006 14:44:34.468713  682995 network_create.go:287] error running [docker network inspect ha-481559]: docker network inspect ha-481559: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-481559 not found
	I1006 14:44:34.468724  682995 network_create.go:289] output of [docker network inspect ha-481559]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-481559 not found
	
	** /stderr **
	I1006 14:44:34.468902  682995 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1006 14:44:34.488223  682995 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ca2540}
	I1006 14:44:34.488276  682995 network_create.go:124] attempt to create docker network ha-481559 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1006 14:44:34.488338  682995 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-481559 ha-481559
	I1006 14:44:34.548630  682995 network_create.go:108] docker network ha-481559 192.168.49.0/24 created
	I1006 14:44:34.548669  682995 kic.go:121] calculated static IP "192.168.49.2" for the "ha-481559" container
	I1006 14:44:34.548729  682995 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1006 14:44:34.566959  682995 cli_runner.go:164] Run: docker volume create ha-481559 --label name.minikube.sigs.k8s.io=ha-481559 --label created_by.minikube.sigs.k8s.io=true
	I1006 14:44:34.586001  682995 oci.go:103] Successfully created a docker volume ha-481559
	I1006 14:44:34.586088  682995 cli_runner.go:164] Run: docker run --rm --name ha-481559-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-481559 --entrypoint /usr/bin/test -v ha-481559:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib
	I1006 14:44:34.994169  682995 oci.go:107] Successfully prepared a docker volume ha-481559
	I1006 14:44:34.994233  682995 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1006 14:44:34.994280  682995 kic.go:194] Starting extracting preloaded images to volume ...
	I1006 14:44:34.994349  682995 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21701-626179/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-481559:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir
	I1006 14:44:39.551248  682995 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21701-626179/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-481559:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir: (4.556814521s)
	I1006 14:44:39.551287  682995 kic.go:203] duration metric: took 4.557022471s to extract preloaded images to volume ...
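
For reference, the two docker run calls above use a standard pattern for seeding a named volume: a throwaway container mounts the volume and an --entrypoint override does the one-off work. A minimal standalone sketch of the same pattern, with the volume, image, and tarball names taken from the log (the local tarball path is hypothetical):

	# Seed a Docker named volume from a host tarball via a disposable container.
	VOLUME=ha-481559
	TARBALL=/path/to/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4  # hypothetical path
	docker volume create "$VOLUME"
	docker run --rm \
	  --entrypoint /usr/bin/tar \
	  -v "$TARBALL":/preloaded.tar:ro \
	  -v "$VOLUME":/extractDir \
	  gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643 \
	  -I lz4 -xf /preloaded.tar -C /extractDir
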
	W1006 14:44:39.551374  682995 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1006 14:44:39.551406  682995 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1006 14:44:39.551451  682995 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1006 14:44:39.608040  682995 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-481559 --name ha-481559 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-481559 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-481559 --network ha-481559 --ip 192.168.49.2 --volume ha-481559:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d
	I1006 14:44:39.865946  682995 cli_runner.go:164] Run: docker container inspect ha-481559 --format={{.State.Running}}
	I1006 14:44:39.883061  682995 cli_runner.go:164] Run: docker container inspect ha-481559 --format={{.State.Status}}
	I1006 14:44:39.901066  682995 cli_runner.go:164] Run: docker exec ha-481559 stat /var/lib/dpkg/alternatives/iptables
	I1006 14:44:39.951869  682995 oci.go:144] the created container "ha-481559" has a running status.
	I1006 14:44:39.951908  682995 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa...
	I1006 14:44:40.176341  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1006 14:44:40.176392  682995 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1006 14:44:40.205643  682995 cli_runner.go:164] Run: docker container inspect ha-481559 --format={{.State.Status}}
	I1006 14:44:40.227924  682995 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1006 14:44:40.227948  682995 kic_runner.go:114] Args: [docker exec --privileged ha-481559 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1006 14:44:40.277808  682995 cli_runner.go:164] Run: docker container inspect ha-481559 --format={{.State.Status}}
	I1006 14:44:40.297063  682995 machine.go:93] provisionDockerMachine start ...
	I1006 14:44:40.297156  682995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:44:40.315828  682995 main.go:141] libmachine: Using SSH client type: native
	I1006 14:44:40.316109  682995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32883 <nil> <nil>}
	I1006 14:44:40.316124  682995 main.go:141] libmachine: About to run SSH command:
	hostname
	I1006 14:44:40.461735  682995 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-481559
	
	I1006 14:44:40.461771  682995 ubuntu.go:182] provisioning hostname "ha-481559"
	I1006 14:44:40.461843  682995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:44:40.481222  682995 main.go:141] libmachine: Using SSH client type: native
	I1006 14:44:40.481551  682995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32883 <nil> <nil>}
	I1006 14:44:40.481575  682995 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-481559 && echo "ha-481559" | sudo tee /etc/hostname
	I1006 14:44:40.636624  682995 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-481559
	
	I1006 14:44:40.636709  682995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:44:40.655017  682995 main.go:141] libmachine: Using SSH client type: native
	I1006 14:44:40.655283  682995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32883 <nil> <nil>}
	I1006 14:44:40.655302  682995 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-481559' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-481559/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-481559' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1006 14:44:40.801276  682995 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1006 14:44:40.801313  682995 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21701-626179/.minikube CaCertPath:/home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21701-626179/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21701-626179/.minikube}
	I1006 14:44:40.801332  682995 ubuntu.go:190] setting up certificates
	I1006 14:44:40.801344  682995 provision.go:84] configureAuth start
	I1006 14:44:40.801398  682995 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-481559
	I1006 14:44:40.819000  682995 provision.go:143] copyHostCerts
	I1006 14:44:40.819052  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21701-626179/.minikube/ca.pem
	I1006 14:44:40.819089  682995 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-626179/.minikube/ca.pem, removing ...
	I1006 14:44:40.819099  682995 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-626179/.minikube/ca.pem
	I1006 14:44:40.819169  682995 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21701-626179/.minikube/ca.pem (1082 bytes)
	I1006 14:44:40.819281  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21701-626179/.minikube/cert.pem
	I1006 14:44:40.819304  682995 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-626179/.minikube/cert.pem, removing ...
	I1006 14:44:40.819309  682995 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-626179/.minikube/cert.pem
	I1006 14:44:40.819338  682995 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21701-626179/.minikube/cert.pem (1123 bytes)
	I1006 14:44:40.819400  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21701-626179/.minikube/key.pem
	I1006 14:44:40.819416  682995 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-626179/.minikube/key.pem, removing ...
	I1006 14:44:40.819428  682995 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-626179/.minikube/key.pem
	I1006 14:44:40.819460  682995 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21701-626179/.minikube/key.pem (1679 bytes)
	I1006 14:44:40.819525  682995 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21701-626179/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca-key.pem org=jenkins.ha-481559 san=[127.0.0.1 192.168.49.2 ha-481559 localhost minikube]
	I1006 14:44:40.896257  682995 provision.go:177] copyRemoteCerts
	I1006 14:44:40.896328  682995 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1006 14:44:40.896370  682995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:44:40.914092  682995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32883 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa Username:docker}
	I1006 14:44:41.016898  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1006 14:44:41.016969  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1006 14:44:41.037131  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1006 14:44:41.037215  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1006 14:44:41.055180  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1006 14:44:41.055258  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
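
The server cert generated above is signed for san=[127.0.0.1 192.168.49.2 ha-481559 localhost minikube] and has just been copied to /etc/docker/server.pem on the node. One way to confirm the SANs that actually landed there (openssl is present on the node, per the openssl version run later in this log):

	# Inspect the Subject Alternative Names in the provisioned server cert.
	docker exec ha-481559 sudo openssl x509 -noout -text \
	  -in /etc/docker/server.pem | grep -A1 'Subject Alternative Name'
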
	I1006 14:44:41.073045  682995 provision.go:87] duration metric: took 271.684433ms to configureAuth
	I1006 14:44:41.073074  682995 ubuntu.go:206] setting minikube options for container-runtime
	I1006 14:44:41.073312  682995 config.go:182] Loaded profile config "ha-481559": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 14:44:41.073456  682995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:44:41.092548  682995 main.go:141] libmachine: Using SSH client type: native
	I1006 14:44:41.092838  682995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32883 <nil> <nil>}
	I1006 14:44:41.092869  682995 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1006 14:44:41.356221  682995 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1006 14:44:41.356247  682995 machine.go:96] duration metric: took 1.059160507s to provisionDockerMachine
	I1006 14:44:41.356259  682995 client.go:171] duration metric: took 6.924524382s to LocalClient.Create
	I1006 14:44:41.356282  682995 start.go:167] duration metric: took 6.924591304s to libmachine.API.Create "ha-481559"
	I1006 14:44:41.356295  682995 start.go:293] postStartSetup for "ha-481559" (driver="docker")
	I1006 14:44:41.356322  682995 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1006 14:44:41.356396  682995 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1006 14:44:41.356453  682995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:44:41.374424  682995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32883 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa Username:docker}
	I1006 14:44:41.479545  682995 ssh_runner.go:195] Run: cat /etc/os-release
	I1006 14:44:41.483318  682995 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1006 14:44:41.483345  682995 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1006 14:44:41.483356  682995 filesync.go:126] Scanning /home/jenkins/minikube-integration/21701-626179/.minikube/addons for local assets ...
	I1006 14:44:41.483402  682995 filesync.go:126] Scanning /home/jenkins/minikube-integration/21701-626179/.minikube/files for local assets ...
	I1006 14:44:41.483499  682995 filesync.go:149] local asset: /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem -> 6297192.pem in /etc/ssl/certs
	I1006 14:44:41.483510  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem -> /etc/ssl/certs/6297192.pem
	I1006 14:44:41.483603  682995 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1006 14:44:41.491409  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem --> /etc/ssl/certs/6297192.pem (1708 bytes)
	I1006 14:44:41.511609  682995 start.go:296] duration metric: took 155.29938ms for postStartSetup
	I1006 14:44:41.511914  682995 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-481559
	I1006 14:44:41.529867  682995 profile.go:143] Saving config to /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/config.json ...
	I1006 14:44:41.530158  682995 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1006 14:44:41.530223  682995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:44:41.547995  682995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32883 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa Username:docker}
	I1006 14:44:41.647810  682995 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1006 14:44:41.652637  682995 start.go:128] duration metric: took 7.223117194s to createHost
	I1006 14:44:41.652662  682995 start.go:83] releasing machines lock for "ha-481559", held for 7.223254897s
	I1006 14:44:41.652730  682995 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-481559
	I1006 14:44:41.670486  682995 ssh_runner.go:195] Run: cat /version.json
	I1006 14:44:41.670511  682995 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1006 14:44:41.670555  682995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:44:41.670581  682995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:44:41.689278  682995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32883 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa Username:docker}
	I1006 14:44:41.689801  682995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32883 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa Username:docker}
	I1006 14:44:41.845142  682995 ssh_runner.go:195] Run: systemctl --version
	I1006 14:44:41.852333  682995 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1006 14:44:41.886799  682995 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1006 14:44:41.891575  682995 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1006 14:44:41.891645  682995 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1006 14:44:41.918020  682995 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1006 14:44:41.918049  682995 start.go:495] detecting cgroup driver to use...
	I1006 14:44:41.918088  682995 detect.go:190] detected "systemd" cgroup driver on host os
	I1006 14:44:41.918148  682995 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1006 14:44:41.934827  682995 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1006 14:44:41.946573  682995 docker.go:218] disabling cri-docker service (if available) ...
	I1006 14:44:41.946626  682995 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1006 14:44:41.961811  682995 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1006 14:44:41.978333  682995 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1006 14:44:42.056893  682995 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1006 14:44:42.140645  682995 docker.go:234] disabling docker service ...
	I1006 14:44:42.140713  682995 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1006 14:44:42.159372  682995 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1006 14:44:42.171857  682995 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1006 14:44:42.255908  682995 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1006 14:44:42.340081  682995 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1006 14:44:42.352916  682995 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1006 14:44:42.367142  682995 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1006 14:44:42.367215  682995 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:44:42.377866  682995 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1006 14:44:42.377939  682995 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:44:42.387157  682995 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:44:42.395944  682995 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:44:42.404768  682995 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1006 14:44:42.412712  682995 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:44:42.420910  682995 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:44:42.434108  682995 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:44:42.442895  682995 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1006 14:44:42.450289  682995 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1006 14:44:42.457667  682995 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 14:44:42.535385  682995 ssh_runner.go:195] Run: sudo systemctl restart crio
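
The chain of sed edits above rewrites /etc/crio/crio.conf.d/02-crio.conf in place before CRI-O is restarted. A quick spot-check of the result, runnable from the host while the node container exists (expected values are inferred from the sed expressions above):

	docker exec ha-481559 grep -E \
	  'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf
	# Expected, approximately:
	#   pause_image = "registry.k8s.io/pause:3.10.1"
	#   cgroup_manager = "systemd"
	#   conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",
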
	I1006 14:44:42.643348  682995 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1006 14:44:42.643424  682995 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1006 14:44:42.647404  682995 start.go:563] Will wait 60s for crictl version
	I1006 14:44:42.647467  682995 ssh_runner.go:195] Run: which crictl
	I1006 14:44:42.651000  682995 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1006 14:44:42.675962  682995 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1006 14:44:42.676044  682995 ssh_runner.go:195] Run: crio --version
	I1006 14:44:42.705541  682995 ssh_runner.go:195] Run: crio --version
	I1006 14:44:42.736773  682995 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1006 14:44:42.738090  682995 cli_runner.go:164] Run: docker network inspect ha-481559 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1006 14:44:42.754892  682995 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1006 14:44:42.759274  682995 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1006 14:44:42.770415  682995 kubeadm.go:883] updating cluster {Name:ha-481559 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-481559 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1006 14:44:42.770534  682995 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1006 14:44:42.770581  682995 ssh_runner.go:195] Run: sudo crictl images --output json
	I1006 14:44:42.805187  682995 crio.go:514] all images are preloaded for cri-o runtime.
	I1006 14:44:42.805221  682995 crio.go:433] Images already preloaded, skipping extraction
	I1006 14:44:42.805274  682995 ssh_runner.go:195] Run: sudo crictl images --output json
	I1006 14:44:42.831096  682995 crio.go:514] all images are preloaded for cri-o runtime.
	I1006 14:44:42.831123  682995 cache_images.go:85] Images are preloaded, skipping loading
	I1006 14:44:42.831132  682995 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1006 14:44:42.831244  682995 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-481559 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-481559 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1006 14:44:42.831321  682995 ssh_runner.go:195] Run: crio config
	I1006 14:44:42.877768  682995 cni.go:84] Creating CNI manager for ""
	I1006 14:44:42.877790  682995 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1006 14:44:42.877819  682995 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1006 14:44:42.877840  682995 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-481559 NodeName:ha-481559 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1006 14:44:42.877966  682995 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-481559"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
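
The rendered config above is written to /var/tmp/minikube/kubeadm.yaml.new a few lines below and later copied to kubeadm.yaml. Assuming kubeadm >= 1.26 (true for the v1.34.1 binaries used here), the file can be schema-checked in place before init; a sketch, run inside the node container:

	# Validate the generated kubeadm config against its declared API versions.
	docker exec ha-481559 sudo /var/lib/minikube/binaries/v1.34.1/kubeadm \
	  config validate --config /var/tmp/minikube/kubeadm.yaml
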
	I1006 14:44:42.877993  682995 kube-vip.go:115] generating kube-vip config ...
	I1006 14:44:42.878035  682995 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1006 14:44:42.890886  682995 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
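
kube-vip's load-balancing mode is skipped because lsmod reports no ip_vs modules inside the container, which reflects the host kernel. A sketch of checking and loading them on the host; the module set is the usual ipvs group and is an assumption about this kernel build:

	# On the host: look for, then load, the ipvs modules (names assumed).
	lsmod | grep ip_vs || true
	sudo modprobe -a ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh
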
	I1006 14:44:42.890995  682995 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
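
Had the control plane come up, this static pod would have advertised the VIP 192.168.49.254 on eth0 via ARP. A quick check from the host, using the container name and address from the log (only meaningful once kube-vip holds the plndr-cp-lock lease):

	# The VIP should show up as a secondary address on eth0.
	docker exec ha-481559 ip addr show eth0 | grep 192.168.49.254
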
	I1006 14:44:42.891046  682995 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1006 14:44:42.899063  682995 binaries.go:44] Found k8s binaries, skipping transfer
	I1006 14:44:42.899132  682995 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1006 14:44:42.906926  682995 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1006 14:44:42.919358  682995 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1006 14:44:42.934141  682995 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1006 14:44:42.945961  682995 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I1006 14:44:42.959489  682995 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1006 14:44:42.962953  682995 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1006 14:44:42.972760  682995 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 14:44:43.053996  682995 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1006 14:44:43.077665  682995 certs.go:69] Setting up /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559 for IP: 192.168.49.2
	I1006 14:44:43.077692  682995 certs.go:195] generating shared ca certs ...
	I1006 14:44:43.077714  682995 certs.go:227] acquiring lock for ca certs: {Name:mka0cc25cb6a953e937aa825fc55167759271aaf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:44:43.077856  682995 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21701-626179/.minikube/ca.key
	I1006 14:44:43.077899  682995 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21701-626179/.minikube/proxy-client-ca.key
	I1006 14:44:43.077909  682995 certs.go:257] generating profile certs ...
	I1006 14:44:43.077963  682995 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/client.key
	I1006 14:44:43.077983  682995 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/client.crt with IP's: []
	I1006 14:44:43.259387  682995 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/client.crt ...
	I1006 14:44:43.259418  682995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/client.crt: {Name:mk058803c7a7f0f2aa3fb547a3aafbba9518c3f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:44:43.259607  682995 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/client.key ...
	I1006 14:44:43.259619  682995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/client.key: {Name:mk0ae3492597f7c1edf0d7262770452fa244a40b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:44:43.265151  682995 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.key.6031b710
	I1006 14:44:43.265175  682995 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.crt.6031b710 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I1006 14:44:43.807062  682995 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.crt.6031b710 ...
	I1006 14:44:43.807095  682995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.crt.6031b710: {Name:mk30dd14f07a4b732bb60853cc2fd5f84f73e2f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:44:43.807283  682995 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.key.6031b710 ...
	I1006 14:44:43.807298  682995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.key.6031b710: {Name:mkf3f5fbdf7957143c03cb611320a2e02acb94c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:44:43.807374  682995 certs.go:382] copying /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.crt.6031b710 -> /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.crt
	I1006 14:44:43.807489  682995 certs.go:386] copying /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.key.6031b710 -> /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.key
	I1006 14:44:43.807558  682995 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.key
	I1006 14:44:43.807574  682995 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.crt with IP's: []
	I1006 14:44:43.994115  682995 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.crt ...
	I1006 14:44:43.994149  682995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.crt: {Name:mk715c6902e25626016d7eb8fdb7b52f0fdce895 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:44:43.994338  682995 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.key ...
	I1006 14:44:43.994350  682995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.key: {Name:mka438ddf42b96ca34511dda1ce60f08f1d48b59 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:44:43.994429  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1006 14:44:43.994449  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1006 14:44:43.994460  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1006 14:44:43.994470  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1006 14:44:43.994480  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1006 14:44:43.994490  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1006 14:44:43.994510  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1006 14:44:43.994522  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1006 14:44:43.994570  682995 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/629719.pem (1338 bytes)
	W1006 14:44:43.994617  682995 certs.go:480] ignoring /home/jenkins/minikube-integration/21701-626179/.minikube/certs/629719_empty.pem, impossibly tiny 0 bytes
	I1006 14:44:43.994630  682995 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca-key.pem (1675 bytes)
	I1006 14:44:43.994653  682995 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem (1082 bytes)
	I1006 14:44:43.994674  682995 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/cert.pem (1123 bytes)
	I1006 14:44:43.994701  682995 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/key.pem (1679 bytes)
	I1006 14:44:43.994739  682995 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem (1708 bytes)
	I1006 14:44:43.994772  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem -> /usr/share/ca-certificates/6297192.pem
	I1006 14:44:43.994786  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1006 14:44:43.994798  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/629719.pem -> /usr/share/ca-certificates/629719.pem
	I1006 14:44:43.995423  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1006 14:44:44.014422  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1006 14:44:44.032422  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1006 14:44:44.050727  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1006 14:44:44.068490  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1006 14:44:44.085540  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1006 14:44:44.102941  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1006 14:44:44.121043  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1006 14:44:44.139583  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem --> /usr/share/ca-certificates/6297192.pem (1708 bytes)
	I1006 14:44:44.159654  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1006 14:44:44.176939  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/certs/629719.pem --> /usr/share/ca-certificates/629719.pem (1338 bytes)
	I1006 14:44:44.194332  682995 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1006 14:44:44.207641  682995 ssh_runner.go:195] Run: openssl version
	I1006 14:44:44.214349  682995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6297192.pem && ln -fs /usr/share/ca-certificates/6297192.pem /etc/ssl/certs/6297192.pem"
	I1006 14:44:44.223426  682995 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6297192.pem
	I1006 14:44:44.227339  682995 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  6 14:13 /usr/share/ca-certificates/6297192.pem
	I1006 14:44:44.227401  682995 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6297192.pem
	I1006 14:44:44.261578  682995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6297192.pem /etc/ssl/certs/3ec20f2e.0"
	I1006 14:44:44.270472  682995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1006 14:44:44.279083  682995 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1006 14:44:44.282749  682995 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  6 13:56 /usr/share/ca-certificates/minikubeCA.pem
	I1006 14:44:44.282813  682995 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1006 14:44:44.316484  682995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1006 14:44:44.325228  682995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/629719.pem && ln -fs /usr/share/ca-certificates/629719.pem /etc/ssl/certs/629719.pem"
	I1006 14:44:44.334098  682995 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/629719.pem
	I1006 14:44:44.337988  682995 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  6 14:13 /usr/share/ca-certificates/629719.pem
	I1006 14:44:44.338051  682995 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/629719.pem
	I1006 14:44:44.371914  682995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/629719.pem /etc/ssl/certs/51391683.0"
	I1006 14:44:44.380847  682995 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1006 14:44:44.384643  682995 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1006 14:44:44.384694  682995 kubeadm.go:400] StartCluster: {Name:ha-481559 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-481559 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 14:44:44.384758  682995 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1006 14:44:44.384823  682995 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1006 14:44:44.413083  682995 cri.go:89] found id: ""
	I1006 14:44:44.413145  682995 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1006 14:44:44.421446  682995 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1006 14:44:44.429380  682995 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1006 14:44:44.429431  682995 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1006 14:44:44.437643  682995 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1006 14:44:44.437667  682995 kubeadm.go:157] found existing configuration files:
	
	I1006 14:44:44.437726  682995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1006 14:44:44.445948  682995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1006 14:44:44.446021  682995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1006 14:44:44.453451  682995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1006 14:44:44.460986  682995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1006 14:44:44.461064  682995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1006 14:44:44.468259  682995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1006 14:44:44.475830  682995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1006 14:44:44.475882  682995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1006 14:44:44.483080  682995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1006 14:44:44.490569  682995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1006 14:44:44.490632  682995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
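
The four grep/rm pairs above apply one rule: any kubeconfig under /etc/kubernetes that does not reference https://control-plane.minikube.internal:8443 is removed before init. The same logic, condensed into a sketch:

	# Remove kubeconfigs that do not point at the expected control-plane endpoint.
	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/$f" \
	    || sudo rm -f "/etc/kubernetes/$f"
	done
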
	I1006 14:44:44.498056  682995 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1006 14:44:44.560210  682995 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1006 14:44:44.618315  682995 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1006 14:48:49.762009  682995 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1006 14:48:49.762136  682995 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
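
The three endpoints named in the error can be probed by hand to separate "process never started" from "process up but failing its check"; a sketch against the addresses above (-k because the serving certs are self-signed; curl is present in the node image, per the curl run earlier in this log):

	docker exec ha-481559 curl -ksS https://192.168.49.2:8443/livez   || echo apiserver unreachable
	docker exec ha-481559 curl -ksS https://127.0.0.1:10259/livez    || echo scheduler unreachable
	docker exec ha-481559 curl -ksS https://127.0.0.1:10257/healthz  || echo controller-manager unreachable
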
	W1006 14:48:49.774048  682995 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ha-481559 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ha-481559 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.501252368s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.001319326s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001358761s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001281021s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	I1006 14:48:49.774147  682995 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1006 14:48:52.524900  682995 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.75072398s)
	I1006 14:48:52.524985  682995 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1006 14:48:52.538104  682995 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1006 14:48:52.538173  682995 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1006 14:48:52.546610  682995 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1006 14:48:52.546639  682995 kubeadm.go:157] found existing configuration files:
	
	I1006 14:48:52.546692  682995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1006 14:48:52.555271  682995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1006 14:48:52.555334  682995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1006 14:48:52.564502  682995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1006 14:48:52.572861  682995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1006 14:48:52.572925  682995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1006 14:48:52.580681  682995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1006 14:48:52.588574  682995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1006 14:48:52.588636  682995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1006 14:48:52.596314  682995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1006 14:48:52.604007  682995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1006 14:48:52.604073  682995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
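	The four grep-then-rm pairs above are a stale-config sweep: each kubeconfig under /etc/kubernetes is kept only if it already references the expected control-plane endpoint, and is otherwise removed so the retried `kubeadm init` can regenerate it. A minimal standalone sketch of the same pattern, assuming the paths and endpoint shown in this log (minikube itself drives these commands over SSH from Go):

	    # hypothetical reproduction of the stale-config check above
	    ENDPOINT="https://control-plane.minikube.internal:8443"
	    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	      # keep the file only if it points at the expected endpoint;
	      # a missing file also fails the grep and is harmlessly rm -f'd
	      sudo grep -q "$ENDPOINT" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
	    done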
	I1006 14:48:52.611967  682995 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1006 14:48:52.650794  682995 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1006 14:48:52.650844  682995 kubeadm.go:318] [preflight] Running pre-flight checks
	I1006 14:48:52.671446  682995 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1006 14:48:52.671559  682995 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1006 14:48:52.671628  682995 kubeadm.go:318] OS: Linux
	I1006 14:48:52.671718  682995 kubeadm.go:318] CGROUPS_CPU: enabled
	I1006 14:48:52.671766  682995 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1006 14:48:52.671811  682995 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1006 14:48:52.671850  682995 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1006 14:48:52.671890  682995 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1006 14:48:52.671928  682995 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1006 14:48:52.671972  682995 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1006 14:48:52.672010  682995 kubeadm.go:318] CGROUPS_IO: enabled
	I1006 14:48:52.732758  682995 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1006 14:48:52.732876  682995 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1006 14:48:52.732979  682995 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1006 14:48:52.739914  682995 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1006 14:48:52.743428  682995 out.go:252]   - Generating certificates and keys ...
	I1006 14:48:52.743535  682995 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1006 14:48:52.743654  682995 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1006 14:48:52.743727  682995 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1006 14:48:52.743777  682995 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1006 14:48:52.743861  682995 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1006 14:48:52.743911  682995 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1006 14:48:52.743985  682995 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1006 14:48:52.744055  682995 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1006 14:48:52.744143  682995 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1006 14:48:52.744228  682995 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1006 14:48:52.744266  682995 kubeadm.go:318] [certs] Using the existing "sa" key
	I1006 14:48:52.744323  682995 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1006 14:48:53.107297  682995 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1006 14:48:53.300701  682995 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1006 14:48:53.503166  682995 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1006 14:48:53.664024  682995 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1006 14:48:53.725865  682995 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1006 14:48:53.726293  682995 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1006 14:48:53.728797  682995 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1006 14:48:53.730586  682995 out.go:252]   - Booting up control plane ...
	I1006 14:48:53.730720  682995 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1006 14:48:53.730830  682995 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1006 14:48:53.730903  682995 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1006 14:48:53.744534  682995 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1006 14:48:53.744672  682995 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1006 14:48:53.752267  682995 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1006 14:48:53.752422  682995 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1006 14:48:53.752505  682995 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1006 14:48:53.852049  682995 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1006 14:48:53.852226  682995 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1006 14:48:54.353729  682995 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 501.825241ms
	I1006 14:48:54.356542  682995 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1006 14:48:54.356619  682995 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1006 14:48:54.356695  682995 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1006 14:48:54.356819  682995 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1006 14:52:54.358331  682995 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.001082251s
	I1006 14:52:54.358653  682995 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001136686s
	I1006 14:52:54.358853  682995 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.001070627s
	I1006 14:52:54.358881  682995 kubeadm.go:318] 
	I1006 14:52:54.359059  682995 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1006 14:52:54.359298  682995 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1006 14:52:54.359539  682995 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1006 14:52:54.359760  682995 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1006 14:52:54.359952  682995 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1006 14:52:54.360116  682995 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1006 14:52:54.360148  682995 kubeadm.go:318] 
	I1006 14:52:54.363033  682995 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1006 14:52:54.363163  682995 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1006 14:52:54.363696  682995 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1006 14:52:54.363761  682995 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1006 14:52:54.363858  682995 kubeadm.go:402] duration metric: took 8m9.979166519s to StartCluster
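	The 8m9.98s StartCluster duration above is dominated by the two failed init attempts: each spent the full control-plane health-check window (kubeadm gave up on all three components at 4m0.00s both times), with a 2.75s `kubeadm reset` and a few seconds of certificate and kubeconfig work in between, i.e. roughly 4m + 4m + setup ≈ 8m10s.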
	I1006 14:52:54.363946  682995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:52:54.364031  682995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:52:54.392579  682995 cri.go:89] found id: ""
	I1006 14:52:54.392622  682995 logs.go:282] 0 containers: []
	W1006 14:52:54.392631  682995 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:52:54.392638  682995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:52:54.392693  682995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:52:54.420188  682995 cri.go:89] found id: ""
	I1006 14:52:54.420226  682995 logs.go:282] 0 containers: []
	W1006 14:52:54.420237  682995 logs.go:284] No container was found matching "etcd"
	I1006 14:52:54.420245  682995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:52:54.420299  682995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:52:54.445694  682995 cri.go:89] found id: ""
	I1006 14:52:54.445723  682995 logs.go:282] 0 containers: []
	W1006 14:52:54.445733  682995 logs.go:284] No container was found matching "coredns"
	I1006 14:52:54.445740  682995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:52:54.445791  682995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:52:54.471923  682995 cri.go:89] found id: ""
	I1006 14:52:54.471954  682995 logs.go:282] 0 containers: []
	W1006 14:52:54.471962  682995 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:52:54.471971  682995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:52:54.472030  682995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:52:54.498805  682995 cri.go:89] found id: ""
	I1006 14:52:54.498836  682995 logs.go:282] 0 containers: []
	W1006 14:52:54.498848  682995 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:52:54.498857  682995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:52:54.498922  682995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:52:54.524613  682995 cri.go:89] found id: ""
	I1006 14:52:54.524638  682995 logs.go:282] 0 containers: []
	W1006 14:52:54.524646  682995 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:52:54.524652  682995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:52:54.524708  682995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:52:54.551140  682995 cri.go:89] found id: ""
	I1006 14:52:54.551170  682995 logs.go:282] 0 containers: []
	W1006 14:52:54.551181  682995 logs.go:284] No container was found matching "kindnet"
	I1006 14:52:54.551194  682995 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:52:54.551220  682995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:52:54.615573  682995 logs.go:123] Gathering logs for container status ...
	I1006 14:52:54.615607  682995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:52:54.645703  682995 logs.go:123] Gathering logs for kubelet ...
	I1006 14:52:54.645732  682995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:52:54.709506  682995 logs.go:123] Gathering logs for dmesg ...
	I1006 14:52:54.709543  682995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:52:54.722963  682995 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:52:54.722997  682995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:52:54.783016  682995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:52:54.774940    2611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 14:52:54.776283    2611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 14:52:54.777585    2611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 14:52:54.778053    2611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 14:52:54.779590    2611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:52:54.774940    2611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 14:52:54.776283    2611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 14:52:54.777585    2611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 14:52:54.778053    2611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 14:52:54.779590    2611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
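	The connection-refused errors above are kubectl probing an apiserver that never came up. The same endpoints this run checks can be probed directly; an illustrative set of commands (not part of the report), assuming curl is present in the node image and noting that the 127.0.0.1 ports are node-local:

	    docker exec ha-481559 curl -sk https://192.168.49.2:8443/livez   || echo "kube-apiserver not serving"
	    docker exec ha-481559 curl -sk https://127.0.0.1:10257/healthz   || echo "kube-controller-manager not serving"
	    docker exec ha-481559 curl -sk https://127.0.0.1:10259/livez     || echo "kube-scheduler not serving"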
	W1006 14:52:54.783054  682995 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.825241ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.001082251s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001136686s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001070627s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
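	kubeadm's advice above is the standard two-step triage, spelled out here with the same CRI-O socket (container IDs vary per run; note that in this report the listing comes back empty because container creation itself fails):

	    SOCK=unix:///var/run/crio/crio.sock
	    sudo crictl --runtime-endpoint "$SOCK" ps -a | grep kube | grep -v pause
	    # then, for a failing container ID from that listing:
	    # sudo crictl --runtime-endpoint "$SOCK" logs <CONTAINERID>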
	W1006 14:52:54.783107  682995 out.go:285] * 
	W1006 14:52:54.785658  682995 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
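	As the box suggests, the full log bundle for this run can be captured with (profile name inferred from the node name in this log; illustrative invocation):

	    minikube logs --file=logs.txt -p ha-481559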
	I1006 14:52:54.789273  682995 out.go:203] 
	W1006 14:52:54.790573  682995 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	
	W1006 14:52:54.790604  682995 out.go:285] * 
	I1006 14:52:54.791821  682995 out.go:203] 
	
	
	==> CRI-O <==
	Oct 06 14:54:28 ha-481559 crio[777]: time="2025-10-06T14:54:28.246426068Z" level=info msg="createCtr: removing container c1376676dafaf7b4d10a72a589a3ae2d56ecf790744e031ae536ebf8175e4485" id=ecbc1c2a-3ed9-4452-81c8-a6b6b312f34f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:54:28 ha-481559 crio[777]: time="2025-10-06T14:54:28.246465998Z" level=info msg="createCtr: deleting container c1376676dafaf7b4d10a72a589a3ae2d56ecf790744e031ae536ebf8175e4485 from storage" id=ecbc1c2a-3ed9-4452-81c8-a6b6b312f34f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:54:28 ha-481559 crio[777]: time="2025-10-06T14:54:28.249858456Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-ha-481559_kube-system_520c6060936b1c2aac479c99ed6c0355_0" id=ecbc1c2a-3ed9-4452-81c8-a6b6b312f34f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:54:30 ha-481559 crio[777]: time="2025-10-06T14:54:30.222474023Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=e2afb2cc-8b95-45ef-839d-d0dd5c34800d name=/runtime.v1.ImageService/ImageStatus
	Oct 06 14:54:30 ha-481559 crio[777]: time="2025-10-06T14:54:30.22368953Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=401de123-749a-4ebc-8ab1-078ad9c73c34 name=/runtime.v1.ImageService/ImageStatus
	Oct 06 14:54:30 ha-481559 crio[777]: time="2025-10-06T14:54:30.224710592Z" level=info msg="Creating container: kube-system/kube-scheduler-ha-481559/kube-scheduler" id=2af7715c-4231-40ed-a841-9fbd70a525e3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:54:30 ha-481559 crio[777]: time="2025-10-06T14:54:30.225088554Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 14:54:30 ha-481559 crio[777]: time="2025-10-06T14:54:30.22924992Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 14:54:30 ha-481559 crio[777]: time="2025-10-06T14:54:30.229878709Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 14:54:30 ha-481559 crio[777]: time="2025-10-06T14:54:30.249923087Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=2af7715c-4231-40ed-a841-9fbd70a525e3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:54:30 ha-481559 crio[777]: time="2025-10-06T14:54:30.251834213Z" level=info msg="createCtr: deleting container ID 4ccf5071d4a15329b25d201d70f0042454b12c8c9f251bd3ce8f5e7daa11b368 from idIndex" id=2af7715c-4231-40ed-a841-9fbd70a525e3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:54:30 ha-481559 crio[777]: time="2025-10-06T14:54:30.251887095Z" level=info msg="createCtr: removing container 4ccf5071d4a15329b25d201d70f0042454b12c8c9f251bd3ce8f5e7daa11b368" id=2af7715c-4231-40ed-a841-9fbd70a525e3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:54:30 ha-481559 crio[777]: time="2025-10-06T14:54:30.251932195Z" level=info msg="createCtr: deleting container 4ccf5071d4a15329b25d201d70f0042454b12c8c9f251bd3ce8f5e7daa11b368 from storage" id=2af7715c-4231-40ed-a841-9fbd70a525e3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:54:30 ha-481559 crio[777]: time="2025-10-06T14:54:30.2573433Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-ha-481559_kube-system_cc93cb8d89afaa943672c70952b45174_0" id=2af7715c-4231-40ed-a841-9fbd70a525e3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:54:32 ha-481559 crio[777]: time="2025-10-06T14:54:32.222451545Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=d2a61b85-604a-4a78-b4a0-a6ac7419591f name=/runtime.v1.ImageService/ImageStatus
	Oct 06 14:54:32 ha-481559 crio[777]: time="2025-10-06T14:54:32.223732465Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=756cc7eb-c750-461e-be90-ed96d3fbe167 name=/runtime.v1.ImageService/ImageStatus
	Oct 06 14:54:32 ha-481559 crio[777]: time="2025-10-06T14:54:32.22488018Z" level=info msg="Creating container: kube-system/kube-controller-manager-ha-481559/kube-controller-manager" id=15a4b8e4-4639-4fe3-b26e-d24edb5aaac3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:54:32 ha-481559 crio[777]: time="2025-10-06T14:54:32.225141708Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 14:54:32 ha-481559 crio[777]: time="2025-10-06T14:54:32.228812582Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 14:54:32 ha-481559 crio[777]: time="2025-10-06T14:54:32.229373513Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 14:54:32 ha-481559 crio[777]: time="2025-10-06T14:54:32.246307916Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=15a4b8e4-4639-4fe3-b26e-d24edb5aaac3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:54:32 ha-481559 crio[777]: time="2025-10-06T14:54:32.247725035Z" level=info msg="createCtr: deleting container ID 5cef11bd3bd8e3ab02e1ecc608a3fdc92d76230ae854ce7d96ffba97b455d556 from idIndex" id=15a4b8e4-4639-4fe3-b26e-d24edb5aaac3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:54:32 ha-481559 crio[777]: time="2025-10-06T14:54:32.247768884Z" level=info msg="createCtr: removing container 5cef11bd3bd8e3ab02e1ecc608a3fdc92d76230ae854ce7d96ffba97b455d556" id=15a4b8e4-4639-4fe3-b26e-d24edb5aaac3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:54:32 ha-481559 crio[777]: time="2025-10-06T14:54:32.247812016Z" level=info msg="createCtr: deleting container 5cef11bd3bd8e3ab02e1ecc608a3fdc92d76230ae854ce7d96ffba97b455d556 from storage" id=15a4b8e4-4639-4fe3-b26e-d24edb5aaac3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:54:32 ha-481559 crio[777]: time="2025-10-06T14:54:32.249966611Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-ha-481559_kube-system_5f3181798721fe8691d871f051785efc_0" id=15a4b8e4-4639-4fe3-b26e-d24edb5aaac3 name=/runtime.v1.RuntimeService/CreateContainer
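	Every CreateContainer attempt in the CRI-O log above fails with "cannot open sd-bus: No such file or directory", which typically means the OCI runtime is using the systemd cgroup driver but cannot reach systemd's private D-Bus socket inside the node. Two hedged checks, assuming shell access to the node container and the usual CRI-O config locations (neither is confirmed by this report):

	    docker exec ha-481559 ls -l /run/systemd/private          # systemd bus socket present?
	    docker exec ha-481559 grep -r cgroup_manager /etc/crio/   # "systemd" vs "cgroupfs"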
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:54:37.788430    4087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 14:54:37.789010    4087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 14:54:37.790605    4087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 14:54:37.791034    4087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 14:54:37.792596    4087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	
	
	==> kernel <==
	 14:54:37 up  5:36,  0 user,  load average: 0.31, 0.11, 0.16
	Linux ha-481559 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 06 14:54:28 ha-481559 kubelet[1985]: E1006 14:54:28.250344    1985 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-ha-481559" podUID="520c6060936b1c2aac479c99ed6c0355"
	Oct 06 14:54:28 ha-481559 kubelet[1985]: E1006 14:54:28.861462    1985 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-481559?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 06 14:54:29 ha-481559 kubelet[1985]: E1006 14:54:29.038379    1985 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8443: connect: connection refused" event="&Event{ObjectMeta:{ha-481559.186bee56630f6256  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-481559,UID:ha-481559,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ha-481559 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ha-481559,},FirstTimestamp:2025-10-06 14:48:54.214861398 +0000 UTC m=+0.361990569,LastTimestamp:2025-10-06 14:48:54.214861398 +0000 UTC m=+0.361990569,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-481559,}"
	Oct 06 14:54:29 ha-481559 kubelet[1985]: I1006 14:54:29.038950    1985 kubelet_node_status.go:75] "Attempting to register node" node="ha-481559"
	Oct 06 14:54:29 ha-481559 kubelet[1985]: E1006 14:54:29.039334    1985 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-481559"
	Oct 06 14:54:30 ha-481559 kubelet[1985]: E1006 14:54:30.221903    1985 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-481559\" not found" node="ha-481559"
	Oct 06 14:54:30 ha-481559 kubelet[1985]: E1006 14:54:30.257771    1985 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 06 14:54:30 ha-481559 kubelet[1985]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 06 14:54:30 ha-481559 kubelet[1985]:  > podSandboxID="28815a6c32deaa458111079bbac61f47b8e22f338f2282fab7d62077c8b07f1e"
	Oct 06 14:54:30 ha-481559 kubelet[1985]: E1006 14:54:30.257901    1985 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 06 14:54:30 ha-481559 kubelet[1985]:         container kube-scheduler start failed in pod kube-scheduler-ha-481559_kube-system(cc93cb8d89afaa943672c70952b45174): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 06 14:54:30 ha-481559 kubelet[1985]:  > logger="UnhandledError"
	Oct 06 14:54:30 ha-481559 kubelet[1985]: E1006 14:54:30.257947    1985 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-ha-481559" podUID="cc93cb8d89afaa943672c70952b45174"
	Oct 06 14:54:32 ha-481559 kubelet[1985]: E1006 14:54:32.221850    1985 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-481559\" not found" node="ha-481559"
	Oct 06 14:54:32 ha-481559 kubelet[1985]: E1006 14:54:32.250411    1985 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 06 14:54:32 ha-481559 kubelet[1985]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 06 14:54:32 ha-481559 kubelet[1985]:  > podSandboxID="ed93c32f27ea2f50c71693ae2d5854b0e5ace377e978db1e844e55a1b66c855a"
	Oct 06 14:54:32 ha-481559 kubelet[1985]: E1006 14:54:32.250537    1985 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 06 14:54:32 ha-481559 kubelet[1985]:         container kube-controller-manager start failed in pod kube-controller-manager-ha-481559_kube-system(5f3181798721fe8691d871f051785efc): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 06 14:54:32 ha-481559 kubelet[1985]:  > logger="UnhandledError"
	Oct 06 14:54:32 ha-481559 kubelet[1985]: E1006 14:54:32.250577    1985 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-ha-481559" podUID="5f3181798721fe8691d871f051785efc"
	Oct 06 14:54:34 ha-481559 kubelet[1985]: E1006 14:54:34.245614    1985 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-481559\" not found"
	Oct 06 14:54:35 ha-481559 kubelet[1985]: E1006 14:54:35.862531    1985 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-481559?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 06 14:54:36 ha-481559 kubelet[1985]: I1006 14:54:36.040612    1985 kubelet_node_status.go:75] "Attempting to register node" node="ha-481559"
	Oct 06 14:54:36 ha-481559 kubelet[1985]: E1006 14:54:36.041041    1985 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-481559"
	

-- /stdout --
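The kubelet tail above shows every control-plane container (etcd, kube-scheduler, kube-controller-manager) failing with "container create failed: cannot open sd-bus: No such file or directory", which is why the apiserver on 8443 refuses connections throughout this run. That message is typically emitted when the OCI runtime's systemd cgroup manager cannot reach the systemd/D-Bus bus inside the node. A minimal probe for the bus sockets, assuming the profile name from this run and the conventional socket paths (neither is confirmed by the log beyond the error text itself):

	out/minikube-linux-amd64 -p ha-481559 ssh -- 'ls -l /run/dbus/system_bus_socket /run/systemd/private; pidof systemd || echo "no systemd as PID 1"'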
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-481559 -n ha-481559
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-481559 -n ha-481559: exit status 6 (294.287404ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1006 14:54:38.158048  693091 status.go:458] kubeconfig endpoint: get endpoint: "ha-481559" does not appear in /home/jenkins/minikube-integration/21701-626179/kubeconfig

** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "ha-481559" apiserver is not running, skipping kubectl commands (state="Stopped")
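Exit status 6 here follows directly from the stderr above: the "ha-481559" context is missing from the kubeconfig, so the endpoint lookup fails before any cluster state is consulted. The warning in stdout already names the remedy; a hedged sketch of applying and verifying it (same binary path the harness uses, kubectl assumed on PATH):

	out/minikube-linux-amd64 -p ha-481559 update-context
	kubectl config get-contexts ha-481559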
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (1.62s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (1.57s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:415: expected profile "ha-481559" in json of 'profile list' to have "Degraded" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-481559\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-481559\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d\",\"Memory\":3072,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.34.1\",\"ClusterName\":\"ha-481559\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.49.2\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"MountString\":\"\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"DisableCoreDNSLog\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
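The assertion walks the `.valid` array of that JSON and compares the `Status` field for the profile under test. When reproducing by hand, the same field can be pulled out of the dump with a one-liner (a sketch assuming jq is installed; it is not part of the harness):

	out/minikube-linux-amd64 profile list --output json | jq -r '.valid[] | select(.Name == "ha-481559") | .Status'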
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-481559
helpers_test.go:243: (dbg) docker inspect ha-481559:

-- stdout --
	[
	    {
	        "Id": "8b017d29b6b188c11217460aa328e959f3cceef4aaac68c0efbe9c3f356f27b0",
	        "Created": "2025-10-06T14:44:39.623616791Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 683567,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-06T14:44:39.660699919Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/8b017d29b6b188c11217460aa328e959f3cceef4aaac68c0efbe9c3f356f27b0/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8b017d29b6b188c11217460aa328e959f3cceef4aaac68c0efbe9c3f356f27b0/hostname",
	        "HostsPath": "/var/lib/docker/containers/8b017d29b6b188c11217460aa328e959f3cceef4aaac68c0efbe9c3f356f27b0/hosts",
	        "LogPath": "/var/lib/docker/containers/8b017d29b6b188c11217460aa328e959f3cceef4aaac68c0efbe9c3f356f27b0/8b017d29b6b188c11217460aa328e959f3cceef4aaac68c0efbe9c3f356f27b0-json.log",
	        "Name": "/ha-481559",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-481559:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-481559",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "8b017d29b6b188c11217460aa328e959f3cceef4aaac68c0efbe9c3f356f27b0",
	                "LowerDir": "/var/lib/docker/overlay2/764c4d13032cea1c981a1513641378a4e309e5bc01b5356c6fecba1c3e0e2311-init/diff:/var/lib/docker/overlay2/498c39ad2e273bbda04a4b230222b9767ea2da097b1fe98436168d26143cd080/diff",
	                "MergedDir": "/var/lib/docker/overlay2/764c4d13032cea1c981a1513641378a4e309e5bc01b5356c6fecba1c3e0e2311/merged",
	                "UpperDir": "/var/lib/docker/overlay2/764c4d13032cea1c981a1513641378a4e309e5bc01b5356c6fecba1c3e0e2311/diff",
	                "WorkDir": "/var/lib/docker/overlay2/764c4d13032cea1c981a1513641378a4e309e5bc01b5356c6fecba1c3e0e2311/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-481559",
	                "Source": "/var/lib/docker/volumes/ha-481559/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-481559",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-481559",
	                "name.minikube.sigs.k8s.io": "ha-481559",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7effae92997970d320561b0b86c210815b18a55d65bd555e1bff50158ed38adc",
	            "SandboxKey": "/var/run/docker/netns/7effae929979",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32883"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32884"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32887"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32885"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32886"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-481559": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "42:f3:45:3f:5b:fc",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "be549c6a1ae4457d4629d9a7f86cde88021333ee0af8bb7a740b008115c43dde",
	                    "EndpointID": "b8540561692606ad815fcdb4502c1e36a16141413d3697f4cf48668502930e4c",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-481559",
	                        "8b017d29b6b1"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
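When only a few fields of a dump like the one above matter, `docker inspect` accepts a Go template instead of emitting the full JSON. A sketch using the container name from this run (the template fields mirror the JSON keys shown above):

	docker inspect -f '{{.State.Status}} {{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' ha-481559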
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-481559 -n ha-481559
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-481559 -n ha-481559: exit status 6 (290.264991ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1006 14:54:38.781544  693341 status.go:458] kubeconfig endpoint: get endpoint: "ha-481559" does not appear in /home/jenkins/minikube-integration/21701-626179/kubeconfig

** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
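Per the help text of `minikube status`, the exit code is a bitmask read right to left: 1 for minikube not OK, 2 for the cluster not OK, 4 for Kubernetes not OK. So exit status 6 decodes as host up but cluster and Kubernetes down, consistent with the "Running" host and the kubeconfig error above. A sketch of the decode, using the same flags as the harness invocation:

	out/minikube-linux-amd64 status -p ha-481559 -n ha-481559 >/dev/null 2>&1; rc=$?
	echo "minikube_nok=$((rc & 1)) cluster_nok=$(( (rc & 2) / 2 )) k8s_nok=$(( (rc & 4) / 4 ))"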
helpers_test.go:252: <<< TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-481559 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                      ARGS                                                       │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ functional-135520 ssh pgrep buildkitd                                                                           │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │                     │
	│ image   │ functional-135520 image build -t localhost/my-image:functional-135520 testdata/build --alsologtostderr          │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │ 06 Oct 25 14:40 UTC │
	│ image   │ functional-135520 image ls                                                                                      │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │ 06 Oct 25 14:40 UTC │
	│ delete  │ -p functional-135520                                                                                            │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:44 UTC │ 06 Oct 25 14:44 UTC │
	│ start   │ ha-481559 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:44 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml                                                │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:52 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- rollout status deployment/busybox                                                          │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:52 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:52 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:52 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:52 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:53 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:53 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:53 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:53 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:53 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:53 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:54 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:54 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:54 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- exec  -- nslookup kubernetes.io                                                            │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:54 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- exec  -- nslookup kubernetes.default                                                       │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:54 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local                                     │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:54 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:54 UTC │                     │
	│ node    │ ha-481559 node add --alsologtostderr -v 5                                                                       │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:54 UTC │                     │
	│ node    │ ha-481559 node stop m02 --alsologtostderr -v 5                                                                  │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:54 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/06 14:44:34
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1006 14:44:34.230587  682995 out.go:360] Setting OutFile to fd 1 ...
	I1006 14:44:34.230719  682995 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 14:44:34.230728  682995 out.go:374] Setting ErrFile to fd 2...
	I1006 14:44:34.230733  682995 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 14:44:34.230969  682995 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-626179/.minikube/bin
	I1006 14:44:34.231523  682995 out.go:368] Setting JSON to false
	I1006 14:44:34.232538  682995 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":19610,"bootTime":1759742264,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1006 14:44:34.232651  682995 start.go:140] virtualization: kvm guest
	I1006 14:44:34.235278  682995 out.go:179] * [ha-481559] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1006 14:44:34.236668  682995 out.go:179]   - MINIKUBE_LOCATION=21701
	I1006 14:44:34.236708  682995 notify.go:220] Checking for updates...
	I1006 14:44:34.239256  682995 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1006 14:44:34.240475  682995 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21701-626179/kubeconfig
	I1006 14:44:34.242249  682995 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21701-626179/.minikube
	I1006 14:44:34.243577  682995 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1006 14:44:34.244737  682995 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1006 14:44:34.246267  682995 driver.go:421] Setting default libvirt URI to qemu:///system
	I1006 14:44:34.271626  682995 docker.go:123] docker version: linux-28.5.0:Docker Engine - Community
	I1006 14:44:34.271783  682995 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 14:44:34.334697  682995 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-06 14:44:34.323928193 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1006 14:44:34.334819  682995 docker.go:318] overlay module found
	I1006 14:44:34.336770  682995 out.go:179] * Using the docker driver based on user configuration
	I1006 14:44:34.338109  682995 start.go:304] selected driver: docker
	I1006 14:44:34.338130  682995 start.go:924] validating driver "docker" against <nil>
	I1006 14:44:34.338144  682995 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1006 14:44:34.338750  682995 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 14:44:34.398314  682995 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-06 14:44:34.387376197 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1006 14:44:34.398587  682995 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1006 14:44:34.399080  682995 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1006 14:44:34.401095  682995 out.go:179] * Using Docker driver with root privileges
	I1006 14:44:34.402283  682995 cni.go:84] Creating CNI manager for ""
	I1006 14:44:34.402367  682995 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1006 14:44:34.402383  682995 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1006 14:44:34.402476  682995 start.go:348] cluster config:
	{Name:ha-481559 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-481559 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 14:44:34.403829  682995 out.go:179] * Starting "ha-481559" primary control-plane node in "ha-481559" cluster
	I1006 14:44:34.404899  682995 cache.go:123] Beginning downloading kic base image for docker with crio
	I1006 14:44:34.406166  682995 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1006 14:44:34.407227  682995 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1006 14:44:34.407272  682995 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21701-626179/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1006 14:44:34.407284  682995 cache.go:58] Caching tarball of preloaded images
	I1006 14:44:34.407376  682995 preload.go:233] Found /home/jenkins/minikube-integration/21701-626179/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1006 14:44:34.407382  682995 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1006 14:44:34.407387  682995 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1006 14:44:34.407757  682995 profile.go:143] Saving config to /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/config.json ...
	I1006 14:44:34.407793  682995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/config.json: {Name:mkefd90ec0b9eae63c82d60bab053cdf7b5d9b74 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:44:34.429193  682995 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1006 14:44:34.429233  682995 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1006 14:44:34.429254  682995 cache.go:232] Successfully downloaded all kic artifacts
	I1006 14:44:34.429296  682995 start.go:360] acquireMachinesLock for ha-481559: {Name:mk240cd185ab39e9e4d3fa7c476aea5736cb5b11 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1006 14:44:34.429397  682995 start.go:364] duration metric: took 84.055µs to acquireMachinesLock for "ha-481559"
	I1006 14:44:34.429421  682995 start.go:93] Provisioning new machine with config: &{Name:ha-481559 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-481559 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1006 14:44:34.429503  682995 start.go:125] createHost starting for "" (driver="docker")
	I1006 14:44:34.431456  682995 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1006 14:44:34.431692  682995 start.go:159] libmachine.API.Create for "ha-481559" (driver="docker")
	I1006 14:44:34.431725  682995 client.go:168] LocalClient.Create starting
	I1006 14:44:34.431791  682995 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem
	I1006 14:44:34.431825  682995 main.go:141] libmachine: Decoding PEM data...
	I1006 14:44:34.431843  682995 main.go:141] libmachine: Parsing certificate...
	I1006 14:44:34.431939  682995 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21701-626179/.minikube/certs/cert.pem
	I1006 14:44:34.431977  682995 main.go:141] libmachine: Decoding PEM data...
	I1006 14:44:34.431994  682995 main.go:141] libmachine: Parsing certificate...
	I1006 14:44:34.432416  682995 cli_runner.go:164] Run: docker network inspect ha-481559 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1006 14:44:34.449965  682995 cli_runner.go:211] docker network inspect ha-481559 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1006 14:44:34.450053  682995 network_create.go:284] running [docker network inspect ha-481559] to gather additional debugging logs...
	I1006 14:44:34.450071  682995 cli_runner.go:164] Run: docker network inspect ha-481559
	W1006 14:44:34.468682  682995 cli_runner.go:211] docker network inspect ha-481559 returned with exit code 1
	I1006 14:44:34.468713  682995 network_create.go:287] error running [docker network inspect ha-481559]: docker network inspect ha-481559: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-481559 not found
	I1006 14:44:34.468724  682995 network_create.go:289] output of [docker network inspect ha-481559]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-481559 not found
	
	** /stderr **
	I1006 14:44:34.468902  682995 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1006 14:44:34.488223  682995 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ca2540}
	I1006 14:44:34.488276  682995 network_create.go:124] attempt to create docker network ha-481559 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1006 14:44:34.488338  682995 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-481559 ha-481559
	I1006 14:44:34.548630  682995 network_create.go:108] docker network ha-481559 192.168.49.0/24 created
	I1006 14:44:34.548669  682995 kic.go:121] calculated static IP "192.168.49.2" for the "ha-481559" container
	I1006 14:44:34.548729  682995 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1006 14:44:34.566959  682995 cli_runner.go:164] Run: docker volume create ha-481559 --label name.minikube.sigs.k8s.io=ha-481559 --label created_by.minikube.sigs.k8s.io=true
	I1006 14:44:34.586001  682995 oci.go:103] Successfully created a docker volume ha-481559
	I1006 14:44:34.586088  682995 cli_runner.go:164] Run: docker run --rm --name ha-481559-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-481559 --entrypoint /usr/bin/test -v ha-481559:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib
	I1006 14:44:34.994169  682995 oci.go:107] Successfully prepared a docker volume ha-481559
	I1006 14:44:34.994233  682995 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1006 14:44:34.994280  682995 kic.go:194] Starting extracting preloaded images to volume ...
	I1006 14:44:34.994349  682995 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21701-626179/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-481559:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir
	I1006 14:44:39.551248  682995 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21701-626179/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-481559:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir: (4.556814521s)
	I1006 14:44:39.551287  682995 kic.go:203] duration metric: took 4.557022471s to extract preloaded images to volume ...
	W1006 14:44:39.551374  682995 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1006 14:44:39.551406  682995 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1006 14:44:39.551451  682995 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1006 14:44:39.608040  682995 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-481559 --name ha-481559 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-481559 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-481559 --network ha-481559 --ip 192.168.49.2 --volume ha-481559:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d
	I1006 14:44:39.865946  682995 cli_runner.go:164] Run: docker container inspect ha-481559 --format={{.State.Running}}
	I1006 14:44:39.883061  682995 cli_runner.go:164] Run: docker container inspect ha-481559 --format={{.State.Status}}
	I1006 14:44:39.901066  682995 cli_runner.go:164] Run: docker exec ha-481559 stat /var/lib/dpkg/alternatives/iptables
	I1006 14:44:39.951869  682995 oci.go:144] the created container "ha-481559" has a running status.
	I1006 14:44:39.951908  682995 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa...
	I1006 14:44:40.176341  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1006 14:44:40.176392  682995 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1006 14:44:40.205643  682995 cli_runner.go:164] Run: docker container inspect ha-481559 --format={{.State.Status}}
	I1006 14:44:40.227924  682995 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1006 14:44:40.227948  682995 kic_runner.go:114] Args: [docker exec --privileged ha-481559 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1006 14:44:40.277808  682995 cli_runner.go:164] Run: docker container inspect ha-481559 --format={{.State.Status}}
	I1006 14:44:40.297063  682995 machine.go:93] provisionDockerMachine start ...
	I1006 14:44:40.297156  682995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:44:40.315828  682995 main.go:141] libmachine: Using SSH client type: native
	I1006 14:44:40.316109  682995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32883 <nil> <nil>}
	I1006 14:44:40.316124  682995 main.go:141] libmachine: About to run SSH command:
	hostname
	I1006 14:44:40.461735  682995 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-481559
	
	I1006 14:44:40.461771  682995 ubuntu.go:182] provisioning hostname "ha-481559"
	I1006 14:44:40.461843  682995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:44:40.481222  682995 main.go:141] libmachine: Using SSH client type: native
	I1006 14:44:40.481551  682995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32883 <nil> <nil>}
	I1006 14:44:40.481575  682995 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-481559 && echo "ha-481559" | sudo tee /etc/hostname
	I1006 14:44:40.636624  682995 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-481559
	
	I1006 14:44:40.636709  682995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:44:40.655017  682995 main.go:141] libmachine: Using SSH client type: native
	I1006 14:44:40.655283  682995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32883 <nil> <nil>}
	I1006 14:44:40.655302  682995 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-481559' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-481559/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-481559' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1006 14:44:40.801276  682995 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1006 14:44:40.801313  682995 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21701-626179/.minikube CaCertPath:/home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21701-626179/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21701-626179/.minikube}
	I1006 14:44:40.801332  682995 ubuntu.go:190] setting up certificates
	I1006 14:44:40.801344  682995 provision.go:84] configureAuth start
	I1006 14:44:40.801398  682995 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-481559
	I1006 14:44:40.819000  682995 provision.go:143] copyHostCerts
	I1006 14:44:40.819052  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21701-626179/.minikube/ca.pem
	I1006 14:44:40.819089  682995 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-626179/.minikube/ca.pem, removing ...
	I1006 14:44:40.819099  682995 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-626179/.minikube/ca.pem
	I1006 14:44:40.819169  682995 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21701-626179/.minikube/ca.pem (1082 bytes)
	I1006 14:44:40.819281  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21701-626179/.minikube/cert.pem
	I1006 14:44:40.819304  682995 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-626179/.minikube/cert.pem, removing ...
	I1006 14:44:40.819309  682995 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-626179/.minikube/cert.pem
	I1006 14:44:40.819338  682995 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21701-626179/.minikube/cert.pem (1123 bytes)
	I1006 14:44:40.819400  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21701-626179/.minikube/key.pem
	I1006 14:44:40.819416  682995 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-626179/.minikube/key.pem, removing ...
	I1006 14:44:40.819428  682995 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-626179/.minikube/key.pem
	I1006 14:44:40.819460  682995 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21701-626179/.minikube/key.pem (1679 bytes)
	I1006 14:44:40.819525  682995 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21701-626179/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca-key.pem org=jenkins.ha-481559 san=[127.0.0.1 192.168.49.2 ha-481559 localhost minikube]
	I1006 14:44:40.896257  682995 provision.go:177] copyRemoteCerts
	I1006 14:44:40.896328  682995 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1006 14:44:40.896370  682995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:44:40.914092  682995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32883 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa Username:docker}
	I1006 14:44:41.016898  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1006 14:44:41.016969  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1006 14:44:41.037131  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1006 14:44:41.037215  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1006 14:44:41.055180  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1006 14:44:41.055258  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1006 14:44:41.073045  682995 provision.go:87] duration metric: took 271.684433ms to configureAuth
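configureAuth above generated a server certificate whose SANs are listed in the provision.go line (127.0.0.1, 192.168.49.2, ha-481559, localhost, minikube) and copied it to /etc/docker/server.pem on the node. If a TLS mismatch is suspected later, the SANs on the deployed cert can be read back directly; a minimal sketch using the remote path from the copyRemoteCerts step:

	# Print the Subject Alternative Names baked into the server cert.
	openssl x509 -noout -text -in /etc/docker/server.pem \
	  | grep -A1 'Subject Alternative Name'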
	I1006 14:44:41.073074  682995 ubuntu.go:206] setting minikube options for container-runtime
	I1006 14:44:41.073312  682995 config.go:182] Loaded profile config "ha-481559": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 14:44:41.073456  682995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:44:41.092548  682995 main.go:141] libmachine: Using SSH client type: native
	I1006 14:44:41.092838  682995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32883 <nil> <nil>}
	I1006 14:44:41.092869  682995 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1006 14:44:41.356221  682995 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1006 14:44:41.356247  682995 machine.go:96] duration metric: took 1.059160507s to provisionDockerMachine
	I1006 14:44:41.356259  682995 client.go:171] duration metric: took 6.924524382s to LocalClient.Create
	I1006 14:44:41.356282  682995 start.go:167] duration metric: took 6.924591304s to libmachine.API.Create "ha-481559"
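The SSH command a few lines up wrote CRIO_MINIKUBE_OPTIONS (an --insecure-registry flag for the 10.96.0.0/12 service CIDR) into /etc/sysconfig/crio.minikube and restarted CRI-O. Whether that drop-in is actually consumed depends on the crio unit shipped in the kicbase image; a hedged verification sketch:

	cat /etc/sysconfig/crio.minikube          # should echo the CRIO_MINIKUBE_OPTIONS line
	systemctl cat crio | grep -i environment  # does the unit source that file?
	systemctl is-active crio                  # confirm the restart left crio running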
	I1006 14:44:41.356295  682995 start.go:293] postStartSetup for "ha-481559" (driver="docker")
	I1006 14:44:41.356322  682995 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1006 14:44:41.356396  682995 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1006 14:44:41.356453  682995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:44:41.374424  682995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32883 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa Username:docker}
	I1006 14:44:41.479545  682995 ssh_runner.go:195] Run: cat /etc/os-release
	I1006 14:44:41.483318  682995 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1006 14:44:41.483345  682995 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1006 14:44:41.483356  682995 filesync.go:126] Scanning /home/jenkins/minikube-integration/21701-626179/.minikube/addons for local assets ...
	I1006 14:44:41.483402  682995 filesync.go:126] Scanning /home/jenkins/minikube-integration/21701-626179/.minikube/files for local assets ...
	I1006 14:44:41.483499  682995 filesync.go:149] local asset: /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem -> 6297192.pem in /etc/ssl/certs
	I1006 14:44:41.483510  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem -> /etc/ssl/certs/6297192.pem
	I1006 14:44:41.483603  682995 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1006 14:44:41.491409  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem --> /etc/ssl/certs/6297192.pem (1708 bytes)
	I1006 14:44:41.511609  682995 start.go:296] duration metric: took 155.29938ms for postStartSetup
	I1006 14:44:41.511914  682995 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-481559
	I1006 14:44:41.529867  682995 profile.go:143] Saving config to /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/config.json ...
	I1006 14:44:41.530158  682995 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1006 14:44:41.530223  682995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:44:41.547995  682995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32883 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa Username:docker}
	I1006 14:44:41.647810  682995 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1006 14:44:41.652637  682995 start.go:128] duration metric: took 7.223117194s to createHost
	I1006 14:44:41.652662  682995 start.go:83] releasing machines lock for "ha-481559", held for 7.223254897s
	I1006 14:44:41.652730  682995 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-481559
	I1006 14:44:41.670486  682995 ssh_runner.go:195] Run: cat /version.json
	I1006 14:44:41.670511  682995 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1006 14:44:41.670555  682995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:44:41.670581  682995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:44:41.689278  682995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32883 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa Username:docker}
	I1006 14:44:41.689801  682995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32883 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa Username:docker}
	I1006 14:44:41.845142  682995 ssh_runner.go:195] Run: systemctl --version
	I1006 14:44:41.852333  682995 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1006 14:44:41.886799  682995 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1006 14:44:41.891575  682995 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1006 14:44:41.891645  682995 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1006 14:44:41.918020  682995 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
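Above, minikube parks any bridge/podman CNI configs by renaming them to *.mk_disabled, leaving pod networking to the kindnet plugin it selects later (see the "multinode detected ... recommending kindnet" line below). A sketch for listing, and if needed reversing, what was disabled:

	# List the CNI configs that were moved aside.
	ls -1 /etc/cni/net.d/*.mk_disabled 2>/dev/null
	# To restore one (illustrative file name from this run):
	# sudo mv /etc/cni/net.d/87-podman-bridge.conflist.mk_disabled \
	#         /etc/cni/net.d/87-podman-bridge.conflist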
	I1006 14:44:41.918049  682995 start.go:495] detecting cgroup driver to use...
	I1006 14:44:41.918088  682995 detect.go:190] detected "systemd" cgroup driver on host os
	I1006 14:44:41.918148  682995 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1006 14:44:41.934827  682995 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1006 14:44:41.946573  682995 docker.go:218] disabling cri-docker service (if available) ...
	I1006 14:44:41.946626  682995 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1006 14:44:41.961811  682995 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1006 14:44:41.978333  682995 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1006 14:44:42.056893  682995 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1006 14:44:42.140645  682995 docker.go:234] disabling docker service ...
	I1006 14:44:42.140713  682995 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1006 14:44:42.159372  682995 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1006 14:44:42.171857  682995 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1006 14:44:42.255908  682995 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1006 14:44:42.340081  682995 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1006 14:44:42.352916  682995 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1006 14:44:42.367142  682995 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1006 14:44:42.367215  682995 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:44:42.377866  682995 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1006 14:44:42.377939  682995 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:44:42.387157  682995 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:44:42.395944  682995 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:44:42.404768  682995 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1006 14:44:42.412712  682995 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:44:42.420910  682995 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:44:42.434108  682995 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:44:42.442895  682995 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1006 14:44:42.450289  682995 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1006 14:44:42.457667  682995 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 14:44:42.535385  682995 ssh_runner.go:195] Run: sudo systemctl restart crio
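The run of sed edits above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pause image pinned to registry.k8s.io/pause:3.10.1, cgroup_manager set to systemd, conmon_cgroup set to pod, and net.ipv4.ip_unprivileged_port_start=0 injected into default_sysctls. A quick way to eyeball the result after the restart:

	# Show the keys the sed commands touched (default_sysctls spans two lines).
	grep -nE 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf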
	I1006 14:44:42.643348  682995 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1006 14:44:42.643424  682995 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1006 14:44:42.647404  682995 start.go:563] Will wait 60s for crictl version
	I1006 14:44:42.647467  682995 ssh_runner.go:195] Run: which crictl
	I1006 14:44:42.651000  682995 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1006 14:44:42.675962  682995 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1006 14:44:42.676044  682995 ssh_runner.go:195] Run: crio --version
	I1006 14:44:42.705541  682995 ssh_runner.go:195] Run: crio --version
	I1006 14:44:42.736773  682995 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1006 14:44:42.738090  682995 cli_runner.go:164] Run: docker network inspect ha-481559 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1006 14:44:42.754892  682995 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1006 14:44:42.759274  682995 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1006 14:44:42.770415  682995 kubeadm.go:883] updating cluster {Name:ha-481559 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-481559 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1006 14:44:42.770534  682995 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1006 14:44:42.770581  682995 ssh_runner.go:195] Run: sudo crictl images --output json
	I1006 14:44:42.805187  682995 crio.go:514] all images are preloaded for cri-o runtime.
	I1006 14:44:42.805221  682995 crio.go:433] Images already preloaded, skipping extraction
	I1006 14:44:42.805274  682995 ssh_runner.go:195] Run: sudo crictl images --output json
	I1006 14:44:42.831096  682995 crio.go:514] all images are preloaded for cri-o runtime.
	I1006 14:44:42.831123  682995 cache_images.go:85] Images are preloaded, skipping loading
	I1006 14:44:42.831132  682995 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1006 14:44:42.831244  682995 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-481559 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-481559 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1006 14:44:42.831321  682995 ssh_runner.go:195] Run: crio config
	I1006 14:44:42.877768  682995 cni.go:84] Creating CNI manager for ""
	I1006 14:44:42.877790  682995 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1006 14:44:42.877819  682995 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1006 14:44:42.877840  682995 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-481559 NodeName:ha-481559 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1006 14:44:42.877966  682995 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-481559"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
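The rendered kubeadm config above is what gets written to /var/tmp/minikube/kubeadm.yaml.new a few lines below and copied into place as kubeadm.yaml before init. When an init fails the way this one does, the file can be sanity-checked offline; a sketch, assuming the kubeadm config validate subcommand present in recent releases:

	# Validate the rendered config against kubeadm's schema.
	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml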
	
	I1006 14:44:42.877993  682995 kube-vip.go:115] generating kube-vip config ...
	I1006 14:44:42.878035  682995 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1006 14:44:42.890886  682995 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1006 14:44:42.890995  682995 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
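kube-vip here runs in ARP mode (vip_arp=true) with leader election, so 192.168.49.254/32 should appear on eth0 of whichever control-plane node holds the plndr-cp-lock lease. Two quick checks from inside a node, using the addresses from the manifest above:

	# Is the VIP bound locally? (true only on the current leader)
	ip addr show dev eth0 | grep 192.168.49.254
	# Does the VIP answer on the apiserver port? (-k: self-signed chain)
	curl -k --max-time 5 https://192.168.49.254:8443/livez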
	I1006 14:44:42.891046  682995 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1006 14:44:42.899063  682995 binaries.go:44] Found k8s binaries, skipping transfer
	I1006 14:44:42.899132  682995 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1006 14:44:42.906926  682995 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1006 14:44:42.919358  682995 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1006 14:44:42.934141  682995 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1006 14:44:42.945961  682995 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I1006 14:44:42.959489  682995 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1006 14:44:42.962953  682995 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1006 14:44:42.972760  682995 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 14:44:43.053996  682995 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1006 14:44:43.077665  682995 certs.go:69] Setting up /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559 for IP: 192.168.49.2
	I1006 14:44:43.077692  682995 certs.go:195] generating shared ca certs ...
	I1006 14:44:43.077714  682995 certs.go:227] acquiring lock for ca certs: {Name:mka0cc25cb6a953e937aa825fc55167759271aaf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:44:43.077856  682995 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21701-626179/.minikube/ca.key
	I1006 14:44:43.077899  682995 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21701-626179/.minikube/proxy-client-ca.key
	I1006 14:44:43.077909  682995 certs.go:257] generating profile certs ...
	I1006 14:44:43.077963  682995 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/client.key
	I1006 14:44:43.077983  682995 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/client.crt with IP's: []
	I1006 14:44:43.259387  682995 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/client.crt ...
	I1006 14:44:43.259418  682995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/client.crt: {Name:mk058803c7a7f0f2aa3fb547a3aafbba9518c3f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:44:43.259607  682995 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/client.key ...
	I1006 14:44:43.259619  682995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/client.key: {Name:mk0ae3492597f7c1edf0d7262770452fa244a40b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:44:43.265151  682995 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.key.6031b710
	I1006 14:44:43.265175  682995 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.crt.6031b710 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I1006 14:44:43.807062  682995 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.crt.6031b710 ...
	I1006 14:44:43.807095  682995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.crt.6031b710: {Name:mk30dd14f07a4b732bb60853cc2fd5f84f73e2f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:44:43.807283  682995 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.key.6031b710 ...
	I1006 14:44:43.807298  682995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.key.6031b710: {Name:mkf3f5fbdf7957143c03cb611320a2e02acb94c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:44:43.807374  682995 certs.go:382] copying /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.crt.6031b710 -> /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.crt
	I1006 14:44:43.807489  682995 certs.go:386] copying /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.key.6031b710 -> /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.key
	I1006 14:44:43.807558  682995 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.key
	I1006 14:44:43.807574  682995 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.crt with IP's: []
	I1006 14:44:43.994115  682995 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.crt ...
	I1006 14:44:43.994149  682995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.crt: {Name:mk715c6902e25626016d7eb8fdb7b52f0fdce895 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:44:43.994338  682995 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.key ...
	I1006 14:44:43.994350  682995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.key: {Name:mka438ddf42b96ca34511dda1ce60f08f1d48b59 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:44:43.994429  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1006 14:44:43.994449  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1006 14:44:43.994460  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1006 14:44:43.994470  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1006 14:44:43.994480  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1006 14:44:43.994490  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1006 14:44:43.994510  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1006 14:44:43.994522  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1006 14:44:43.994570  682995 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/629719.pem (1338 bytes)
	W1006 14:44:43.994617  682995 certs.go:480] ignoring /home/jenkins/minikube-integration/21701-626179/.minikube/certs/629719_empty.pem, impossibly tiny 0 bytes
	I1006 14:44:43.994630  682995 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca-key.pem (1675 bytes)
	I1006 14:44:43.994653  682995 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem (1082 bytes)
	I1006 14:44:43.994674  682995 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/cert.pem (1123 bytes)
	I1006 14:44:43.994701  682995 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/key.pem (1679 bytes)
	I1006 14:44:43.994739  682995 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem (1708 bytes)
	I1006 14:44:43.994772  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem -> /usr/share/ca-certificates/6297192.pem
	I1006 14:44:43.994786  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1006 14:44:43.994798  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/629719.pem -> /usr/share/ca-certificates/629719.pem
	I1006 14:44:43.995423  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1006 14:44:44.014422  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1006 14:44:44.032422  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1006 14:44:44.050727  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1006 14:44:44.068490  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1006 14:44:44.085540  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1006 14:44:44.102941  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1006 14:44:44.121043  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1006 14:44:44.139583  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem --> /usr/share/ca-certificates/6297192.pem (1708 bytes)
	I1006 14:44:44.159654  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1006 14:44:44.176939  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/certs/629719.pem --> /usr/share/ca-certificates/629719.pem (1338 bytes)
	I1006 14:44:44.194332  682995 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1006 14:44:44.207641  682995 ssh_runner.go:195] Run: openssl version
	I1006 14:44:44.214349  682995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6297192.pem && ln -fs /usr/share/ca-certificates/6297192.pem /etc/ssl/certs/6297192.pem"
	I1006 14:44:44.223426  682995 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6297192.pem
	I1006 14:44:44.227339  682995 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  6 14:13 /usr/share/ca-certificates/6297192.pem
	I1006 14:44:44.227401  682995 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6297192.pem
	I1006 14:44:44.261578  682995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6297192.pem /etc/ssl/certs/3ec20f2e.0"
	I1006 14:44:44.270472  682995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1006 14:44:44.279083  682995 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1006 14:44:44.282749  682995 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  6 13:56 /usr/share/ca-certificates/minikubeCA.pem
	I1006 14:44:44.282813  682995 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1006 14:44:44.316484  682995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1006 14:44:44.325228  682995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/629719.pem && ln -fs /usr/share/ca-certificates/629719.pem /etc/ssl/certs/629719.pem"
	I1006 14:44:44.334098  682995 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/629719.pem
	I1006 14:44:44.337988  682995 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  6 14:13 /usr/share/ca-certificates/629719.pem
	I1006 14:44:44.338051  682995 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/629719.pem
	I1006 14:44:44.371914  682995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/629719.pem /etc/ssl/certs/51391683.0"
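The openssl x509 -hash calls above compute OpenSSL's subject-name hash; the ln -fs commands then create the <hash>.0 symlinks (3ec20f2e.0, b5213941.0, 51391683.0 in this run) that CApath-style lookups in /etc/ssl/certs require. The idiom in isolation:

	# Link a CA cert under the subject-hash name OpenSSL looks up.
	CERT=/usr/share/ca-certificates/minikubeCA.pem   # path from this run
	HASH=$(openssl x509 -hash -noout -in "$CERT")
	sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"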
	I1006 14:44:44.380847  682995 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1006 14:44:44.384643  682995 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1006 14:44:44.384694  682995 kubeadm.go:400] StartCluster: {Name:ha-481559 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-481559 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 14:44:44.384758  682995 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1006 14:44:44.384823  682995 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1006 14:44:44.413083  682995 cri.go:89] found id: ""
	I1006 14:44:44.413145  682995 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1006 14:44:44.421446  682995 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1006 14:44:44.429380  682995 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1006 14:44:44.429431  682995 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1006 14:44:44.437643  682995 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1006 14:44:44.437667  682995 kubeadm.go:157] found existing configuration files:
	
	I1006 14:44:44.437726  682995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1006 14:44:44.445948  682995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1006 14:44:44.446021  682995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1006 14:44:44.453451  682995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1006 14:44:44.460986  682995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1006 14:44:44.461064  682995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1006 14:44:44.468259  682995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1006 14:44:44.475830  682995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1006 14:44:44.475882  682995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1006 14:44:44.483080  682995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1006 14:44:44.490569  682995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1006 14:44:44.490632  682995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
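The four grep/rm pairs above are minikube's stale-config sweep: any kubeconfig under /etc/kubernetes that does not already point at https://control-plane.minikube.internal:8443 is deleted so kubeadm init can regenerate it. The same sweep written as a loop:

	ENDPOINT='https://control-plane.minikube.internal:8443'
	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  # Keep the file only if it already targets the HA endpoint.
	  sudo grep -q "$ENDPOINT" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
	done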
	I1006 14:44:44.498056  682995 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1006 14:44:44.560210  682995 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1006 14:44:44.618315  682995 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1006 14:48:49.762009  682995 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1006 14:48:49.762136  682995 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1006 14:48:49.765019  682995 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1006 14:48:49.765065  682995 kubeadm.go:318] [preflight] Running pre-flight checks
	I1006 14:48:49.765142  682995 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1006 14:48:49.765192  682995 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1006 14:48:49.765263  682995 kubeadm.go:318] OS: Linux
	I1006 14:48:49.765329  682995 kubeadm.go:318] CGROUPS_CPU: enabled
	I1006 14:48:49.765384  682995 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1006 14:48:49.765424  682995 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1006 14:48:49.765465  682995 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1006 14:48:49.765507  682995 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1006 14:48:49.765557  682995 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1006 14:48:49.765644  682995 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1006 14:48:49.765713  682995 kubeadm.go:318] CGROUPS_IO: enabled
	I1006 14:48:49.765816  682995 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1006 14:48:49.765897  682995 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1006 14:48:49.765974  682995 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1006 14:48:49.766033  682995 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1006 14:48:49.768189  682995 out.go:252]   - Generating certificates and keys ...
	I1006 14:48:49.768304  682995 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1006 14:48:49.768391  682995 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1006 14:48:49.768495  682995 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1006 14:48:49.768546  682995 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1006 14:48:49.768600  682995 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1006 14:48:49.768641  682995 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1006 14:48:49.768684  682995 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1006 14:48:49.768778  682995 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [ha-481559 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1006 14:48:49.768847  682995 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1006 14:48:49.768982  682995 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [ha-481559 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1006 14:48:49.769042  682995 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1006 14:48:49.769108  682995 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1006 14:48:49.769166  682995 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1006 14:48:49.769263  682995 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1006 14:48:49.769339  682995 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1006 14:48:49.769416  682995 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1006 14:48:49.769489  682995 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1006 14:48:49.769549  682995 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1006 14:48:49.769601  682995 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1006 14:48:49.769671  682995 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1006 14:48:49.769753  682995 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1006 14:48:49.771489  682995 out.go:252]   - Booting up control plane ...
	I1006 14:48:49.771577  682995 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1006 14:48:49.771664  682995 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1006 14:48:49.771742  682995 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1006 14:48:49.771858  682995 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1006 14:48:49.771974  682995 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1006 14:48:49.772108  682995 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1006 14:48:49.772220  682995 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1006 14:48:49.772288  682995 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1006 14:48:49.772413  682995 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1006 14:48:49.772556  682995 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1006 14:48:49.772647  682995 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.501252368s
	I1006 14:48:49.772772  682995 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1006 14:48:49.772891  682995 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1006 14:48:49.772971  682995 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1006 14:48:49.773033  682995 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1006 14:48:49.773108  682995 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.001319326s
	I1006 14:48:49.773189  682995 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001358761s
	I1006 14:48:49.773304  682995 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.001281021s
	I1006 14:48:49.773319  682995 kubeadm.go:318] 
	I1006 14:48:49.773407  682995 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1006 14:48:49.773472  682995 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1006 14:48:49.773545  682995 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1006 14:48:49.773657  682995 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1006 14:48:49.773771  682995 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1006 14:48:49.773850  682995 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1006 14:48:49.773891  682995 kubeadm.go:318] 
	W1006 14:48:49.774048  682995 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ha-481559 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ha-481559 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.501252368s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.001319326s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001358761s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001281021s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
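The crictl triage that kubeadm recommends in the output above can be run as a single pass. A minimal sketch, assuming the CRI-O socket path quoted in the log and a shell inside the node (e.g. via `minikube ssh`):

	SOCK=unix:///var/run/crio/crio.sock
	# list kube containers, with pause sandboxes filtered out, as the advice suggests
	sudo crictl --runtime-endpoint "$SOCK" ps -a | grep kube | grep -v pause
	# dump the tail of each matching container's logs
	for id in $(sudo crictl --runtime-endpoint "$SOCK" ps -a | grep kube | grep -v pause | awk '{print $1}'); do
	  echo "--- $id ---"
	  sudo crictl --runtime-endpoint "$SOCK" logs "$id" 2>&1 | tail -n 30
	done

(In this run, as the container scans further below show, the listing would be empty: no kube container was ever created.)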
	
	I1006 14:48:49.774147  682995 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1006 14:48:52.524900  682995 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.75072398s)
	I1006 14:48:52.524985  682995 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1006 14:48:52.538104  682995 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1006 14:48:52.538173  682995 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1006 14:48:52.546610  682995 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1006 14:48:52.546639  682995 kubeadm.go:157] found existing configuration files:
	
	I1006 14:48:52.546692  682995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1006 14:48:52.555271  682995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1006 14:48:52.555334  682995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1006 14:48:52.564502  682995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1006 14:48:52.572861  682995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1006 14:48:52.572925  682995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1006 14:48:52.580681  682995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1006 14:48:52.588574  682995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1006 14:48:52.588636  682995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1006 14:48:52.596314  682995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1006 14:48:52.604007  682995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1006 14:48:52.604073  682995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
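The grep-then-rm sequence above is minikube's stale kubeconfig cleanup: each file under /etc/kubernetes is kept only if it already points at the expected control-plane endpoint. A standalone sketch of the same check (endpoint and file list copied from the log; the loop form is hypothetical):

	endpoint="https://control-plane.minikube.internal:8443"
	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  # grep exits non-zero when the endpoint is absent or the file is missing,
	  # which is exactly the "Process exited with status 2" seen in the log
	  sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
	done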
	I1006 14:48:52.611967  682995 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1006 14:48:52.650794  682995 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1006 14:48:52.650844  682995 kubeadm.go:318] [preflight] Running pre-flight checks
	I1006 14:48:52.671446  682995 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1006 14:48:52.671559  682995 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1006 14:48:52.671628  682995 kubeadm.go:318] OS: Linux
	I1006 14:48:52.671718  682995 kubeadm.go:318] CGROUPS_CPU: enabled
	I1006 14:48:52.671766  682995 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1006 14:48:52.671811  682995 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1006 14:48:52.671850  682995 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1006 14:48:52.671890  682995 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1006 14:48:52.671928  682995 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1006 14:48:52.671972  682995 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1006 14:48:52.672010  682995 kubeadm.go:318] CGROUPS_IO: enabled
	I1006 14:48:52.732758  682995 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1006 14:48:52.732876  682995 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1006 14:48:52.732979  682995 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1006 14:48:52.739914  682995 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1006 14:48:52.743428  682995 out.go:252]   - Generating certificates and keys ...
	I1006 14:48:52.743535  682995 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1006 14:48:52.743654  682995 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1006 14:48:52.743727  682995 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1006 14:48:52.743777  682995 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1006 14:48:52.743861  682995 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1006 14:48:52.743911  682995 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1006 14:48:52.743985  682995 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1006 14:48:52.744055  682995 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1006 14:48:52.744143  682995 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1006 14:48:52.744228  682995 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1006 14:48:52.744266  682995 kubeadm.go:318] [certs] Using the existing "sa" key
	I1006 14:48:52.744323  682995 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1006 14:48:53.107297  682995 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1006 14:48:53.300701  682995 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1006 14:48:53.503166  682995 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1006 14:48:53.664024  682995 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1006 14:48:53.725865  682995 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1006 14:48:53.726293  682995 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1006 14:48:53.728797  682995 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1006 14:48:53.730586  682995 out.go:252]   - Booting up control plane ...
	I1006 14:48:53.730720  682995 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1006 14:48:53.730830  682995 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1006 14:48:53.730903  682995 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1006 14:48:53.744534  682995 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1006 14:48:53.744672  682995 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1006 14:48:53.752267  682995 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1006 14:48:53.752422  682995 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1006 14:48:53.752505  682995 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1006 14:48:53.852049  682995 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1006 14:48:53.852226  682995 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1006 14:48:54.353729  682995 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 501.825241ms
	I1006 14:48:54.356542  682995 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1006 14:48:54.356619  682995 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1006 14:48:54.356695  682995 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1006 14:48:54.356819  682995 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1006 14:52:54.358331  682995 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.001082251s
	I1006 14:52:54.358653  682995 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001136686s
	I1006 14:52:54.358853  682995 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.001070627s
	I1006 14:52:54.358881  682995 kubeadm.go:318] 
	I1006 14:52:54.359059  682995 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1006 14:52:54.359298  682995 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1006 14:52:54.359539  682995 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1006 14:52:54.359760  682995 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1006 14:52:54.359952  682995 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1006 14:52:54.360116  682995 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1006 14:52:54.360148  682995 kubeadm.go:318] 
	I1006 14:52:54.363033  682995 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1006 14:52:54.363163  682995 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1006 14:52:54.363696  682995 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1006 14:52:54.363761  682995 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1006 14:52:54.363858  682995 kubeadm.go:402] duration metric: took 8m9.979166519s to StartCluster
	I1006 14:52:54.363946  682995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:52:54.364031  682995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:52:54.392579  682995 cri.go:89] found id: ""
	I1006 14:52:54.392622  682995 logs.go:282] 0 containers: []
	W1006 14:52:54.392631  682995 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:52:54.392638  682995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:52:54.392693  682995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:52:54.420188  682995 cri.go:89] found id: ""
	I1006 14:52:54.420226  682995 logs.go:282] 0 containers: []
	W1006 14:52:54.420237  682995 logs.go:284] No container was found matching "etcd"
	I1006 14:52:54.420245  682995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:52:54.420299  682995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:52:54.445694  682995 cri.go:89] found id: ""
	I1006 14:52:54.445723  682995 logs.go:282] 0 containers: []
	W1006 14:52:54.445733  682995 logs.go:284] No container was found matching "coredns"
	I1006 14:52:54.445740  682995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:52:54.445791  682995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:52:54.471923  682995 cri.go:89] found id: ""
	I1006 14:52:54.471954  682995 logs.go:282] 0 containers: []
	W1006 14:52:54.471962  682995 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:52:54.471971  682995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:52:54.472030  682995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:52:54.498805  682995 cri.go:89] found id: ""
	I1006 14:52:54.498836  682995 logs.go:282] 0 containers: []
	W1006 14:52:54.498848  682995 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:52:54.498857  682995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:52:54.498922  682995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:52:54.524613  682995 cri.go:89] found id: ""
	I1006 14:52:54.524638  682995 logs.go:282] 0 containers: []
	W1006 14:52:54.524646  682995 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:52:54.524652  682995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:52:54.524708  682995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:52:54.551140  682995 cri.go:89] found id: ""
	I1006 14:52:54.551170  682995 logs.go:282] 0 containers: []
	W1006 14:52:54.551181  682995 logs.go:284] No container was found matching "kindnet"
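Each cri.go probe above is one `crictl ps -a --quiet --name=...` call that returns nothing because no container was ever created. The whole scan can be reproduced with a loop (component list copied from the log):

	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	  ids=$(sudo crictl ps -a --quiet --name="$name")
	  [ -z "$ids" ] && echo "No container was found matching \"$name\""
	done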
	I1006 14:52:54.551194  682995 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:52:54.551220  682995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:52:54.615573  682995 logs.go:123] Gathering logs for container status ...
	I1006 14:52:54.615607  682995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:52:54.645703  682995 logs.go:123] Gathering logs for kubelet ...
	I1006 14:52:54.645732  682995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:52:54.709506  682995 logs.go:123] Gathering logs for dmesg ...
	I1006 14:52:54.709543  682995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:52:54.722963  682995 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:52:54.722997  682995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:52:54.783016  682995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:52:54.774940    2611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 14:52:54.776283    2611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 14:52:54.777585    2611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 14:52:54.778053    2611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 14:52:54.779590    2611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:52:54.774940    2611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 14:52:54.776283    2611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 14:52:54.777585    2611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 14:52:54.778053    2611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 14:52:54.779590    2611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
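The same log bundle minikube gathers here can be pulled by hand from a live node. The command strings are verbatim from the Run: lines above; wrapping them in `minikube ssh` is an assumption about how you would reach the node:

	minikube ssh -p ha-481559 "sudo journalctl -u crio -n 400"
	minikube ssh -p ha-481559 "sudo journalctl -u kubelet -n 400"
	minikube ssh -p ha-481559 "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"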
	W1006 14:52:54.783054  682995 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.825241ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.001082251s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001136686s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001070627s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1006 14:52:54.783107  682995 out.go:285] * 
	W1006 14:52:54.783182  682995 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout and stderr: identical to the "Error starting cluster" output above (verbatim duplicate elided)
	
	W1006 14:52:54.783200  682995 out.go:285] * 
	W1006 14:52:54.785658  682995 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1006 14:52:54.789273  682995 out.go:203] 
	W1006 14:52:54.790573  682995 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout and stderr: identical to the "Error starting cluster" output above (verbatim duplicate elided)
	
	W1006 14:52:54.790604  682995 out.go:285] * 
	I1006 14:52:54.791821  682995 out.go:203] 
	
	
	==> CRI-O <==
	Oct 06 14:54:30 ha-481559 crio[777]: time="2025-10-06T14:54:30.251887095Z" level=info msg="createCtr: removing container 4ccf5071d4a15329b25d201d70f0042454b12c8c9f251bd3ce8f5e7daa11b368" id=2af7715c-4231-40ed-a841-9fbd70a525e3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:54:30 ha-481559 crio[777]: time="2025-10-06T14:54:30.251932195Z" level=info msg="createCtr: deleting container 4ccf5071d4a15329b25d201d70f0042454b12c8c9f251bd3ce8f5e7daa11b368 from storage" id=2af7715c-4231-40ed-a841-9fbd70a525e3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:54:30 ha-481559 crio[777]: time="2025-10-06T14:54:30.2573433Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-ha-481559_kube-system_cc93cb8d89afaa943672c70952b45174_0" id=2af7715c-4231-40ed-a841-9fbd70a525e3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:54:32 ha-481559 crio[777]: time="2025-10-06T14:54:32.222451545Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=d2a61b85-604a-4a78-b4a0-a6ac7419591f name=/runtime.v1.ImageService/ImageStatus
	Oct 06 14:54:32 ha-481559 crio[777]: time="2025-10-06T14:54:32.223732465Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=756cc7eb-c750-461e-be90-ed96d3fbe167 name=/runtime.v1.ImageService/ImageStatus
	Oct 06 14:54:32 ha-481559 crio[777]: time="2025-10-06T14:54:32.22488018Z" level=info msg="Creating container: kube-system/kube-controller-manager-ha-481559/kube-controller-manager" id=15a4b8e4-4639-4fe3-b26e-d24edb5aaac3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:54:32 ha-481559 crio[777]: time="2025-10-06T14:54:32.225141708Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 14:54:32 ha-481559 crio[777]: time="2025-10-06T14:54:32.228812582Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 14:54:32 ha-481559 crio[777]: time="2025-10-06T14:54:32.229373513Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 14:54:32 ha-481559 crio[777]: time="2025-10-06T14:54:32.246307916Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=15a4b8e4-4639-4fe3-b26e-d24edb5aaac3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:54:32 ha-481559 crio[777]: time="2025-10-06T14:54:32.247725035Z" level=info msg="createCtr: deleting container ID 5cef11bd3bd8e3ab02e1ecc608a3fdc92d76230ae854ce7d96ffba97b455d556 from idIndex" id=15a4b8e4-4639-4fe3-b26e-d24edb5aaac3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:54:32 ha-481559 crio[777]: time="2025-10-06T14:54:32.247768884Z" level=info msg="createCtr: removing container 5cef11bd3bd8e3ab02e1ecc608a3fdc92d76230ae854ce7d96ffba97b455d556" id=15a4b8e4-4639-4fe3-b26e-d24edb5aaac3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:54:32 ha-481559 crio[777]: time="2025-10-06T14:54:32.247812016Z" level=info msg="createCtr: deleting container 5cef11bd3bd8e3ab02e1ecc608a3fdc92d76230ae854ce7d96ffba97b455d556 from storage" id=15a4b8e4-4639-4fe3-b26e-d24edb5aaac3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:54:32 ha-481559 crio[777]: time="2025-10-06T14:54:32.249966611Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-ha-481559_kube-system_5f3181798721fe8691d871f051785efc_0" id=15a4b8e4-4639-4fe3-b26e-d24edb5aaac3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:54:39 ha-481559 crio[777]: time="2025-10-06T14:54:39.221931099Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=88b1b334-98cc-4070-80bd-1eaef8fad396 name=/runtime.v1.ImageService/ImageStatus
	Oct 06 14:54:39 ha-481559 crio[777]: time="2025-10-06T14:54:39.222963124Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=b39b46cc-a2bf-4e00-88ba-09a709695d1b name=/runtime.v1.ImageService/ImageStatus
	Oct 06 14:54:39 ha-481559 crio[777]: time="2025-10-06T14:54:39.224022546Z" level=info msg="Creating container: kube-system/kube-apiserver-ha-481559/kube-apiserver" id=d68945bb-971e-48cf-b920-5d7fe25f1da3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:54:39 ha-481559 crio[777]: time="2025-10-06T14:54:39.22429977Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 14:54:39 ha-481559 crio[777]: time="2025-10-06T14:54:39.228174961Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 14:54:39 ha-481559 crio[777]: time="2025-10-06T14:54:39.228645886Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 14:54:39 ha-481559 crio[777]: time="2025-10-06T14:54:39.242094755Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=d68945bb-971e-48cf-b920-5d7fe25f1da3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:54:39 ha-481559 crio[777]: time="2025-10-06T14:54:39.243442491Z" level=info msg="createCtr: deleting container ID bb56b5ee13366921fdc79c1d1851779827dd3bddce9752f19c84a7f2b609fd16 from idIndex" id=d68945bb-971e-48cf-b920-5d7fe25f1da3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:54:39 ha-481559 crio[777]: time="2025-10-06T14:54:39.243478798Z" level=info msg="createCtr: removing container bb56b5ee13366921fdc79c1d1851779827dd3bddce9752f19c84a7f2b609fd16" id=d68945bb-971e-48cf-b920-5d7fe25f1da3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:54:39 ha-481559 crio[777]: time="2025-10-06T14:54:39.243507669Z" level=info msg="createCtr: deleting container bb56b5ee13366921fdc79c1d1851779827dd3bddce9752f19c84a7f2b609fd16 from storage" id=d68945bb-971e-48cf-b920-5d7fe25f1da3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:54:39 ha-481559 crio[777]: time="2025-10-06T14:54:39.245657192Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-ha-481559_kube-system_b4e1cca8a09d3789a7e0862428dfe0db_0" id=d68945bb-971e-48cf-b920-5d7fe25f1da3 name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:54:39.361474    4263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 14:54:39.362106    4263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 14:54:39.363667    4263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 14:54:39.364125    4263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 14:54:39.365612    4263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	
	
	==> kernel <==
	 14:54:39 up  5:36,  0 user,  load average: 0.37, 0.12, 0.17
	Linux ha-481559 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 06 14:54:30 ha-481559 kubelet[1985]: E1006 14:54:30.257901    1985 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 06 14:54:30 ha-481559 kubelet[1985]:         container kube-scheduler start failed in pod kube-scheduler-ha-481559_kube-system(cc93cb8d89afaa943672c70952b45174): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 06 14:54:30 ha-481559 kubelet[1985]:  > logger="UnhandledError"
	Oct 06 14:54:30 ha-481559 kubelet[1985]: E1006 14:54:30.257947    1985 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-ha-481559" podUID="cc93cb8d89afaa943672c70952b45174"
	Oct 06 14:54:32 ha-481559 kubelet[1985]: E1006 14:54:32.221850    1985 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-481559\" not found" node="ha-481559"
	Oct 06 14:54:32 ha-481559 kubelet[1985]: E1006 14:54:32.250411    1985 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 06 14:54:32 ha-481559 kubelet[1985]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 06 14:54:32 ha-481559 kubelet[1985]:  > podSandboxID="ed93c32f27ea2f50c71693ae2d5854b0e5ace377e978db1e844e55a1b66c855a"
	Oct 06 14:54:32 ha-481559 kubelet[1985]: E1006 14:54:32.250537    1985 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 06 14:54:32 ha-481559 kubelet[1985]:         container kube-controller-manager start failed in pod kube-controller-manager-ha-481559_kube-system(5f3181798721fe8691d871f051785efc): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 06 14:54:32 ha-481559 kubelet[1985]:  > logger="UnhandledError"
	Oct 06 14:54:32 ha-481559 kubelet[1985]: E1006 14:54:32.250577    1985 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-ha-481559" podUID="5f3181798721fe8691d871f051785efc"
	Oct 06 14:54:34 ha-481559 kubelet[1985]: E1006 14:54:34.245614    1985 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-481559\" not found"
	Oct 06 14:54:35 ha-481559 kubelet[1985]: E1006 14:54:35.862531    1985 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-481559?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 06 14:54:36 ha-481559 kubelet[1985]: I1006 14:54:36.040612    1985 kubelet_node_status.go:75] "Attempting to register node" node="ha-481559"
	Oct 06 14:54:36 ha-481559 kubelet[1985]: E1006 14:54:36.041041    1985 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-481559"
	Oct 06 14:54:39 ha-481559 kubelet[1985]: E1006 14:54:39.039322    1985 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8443: connect: connection refused" event="&Event{ObjectMeta:{ha-481559.186bee56630f6256  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-481559,UID:ha-481559,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ha-481559 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ha-481559,},FirstTimestamp:2025-10-06 14:48:54.214861398 +0000 UTC m=+0.361990569,LastTimestamp:2025-10-06 14:48:54.214861398 +0000 UTC m=+0.361990569,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-481559,}"
	Oct 06 14:54:39 ha-481559 kubelet[1985]: E1006 14:54:39.221521    1985 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-481559\" not found" node="ha-481559"
	Oct 06 14:54:39 ha-481559 kubelet[1985]: E1006 14:54:39.245907    1985 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 06 14:54:39 ha-481559 kubelet[1985]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 06 14:54:39 ha-481559 kubelet[1985]:  > podSandboxID="cadd804367d6dcdba2fb49fe06e3c1db8b35e6ee5c505328925ae346d4cdb867"
	Oct 06 14:54:39 ha-481559 kubelet[1985]: E1006 14:54:39.246000    1985 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 06 14:54:39 ha-481559 kubelet[1985]:         container kube-apiserver start failed in pod kube-apiserver-ha-481559_kube-system(b4e1cca8a09d3789a7e0862428dfe0db): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 06 14:54:39 ha-481559 kubelet[1985]:  > logger="UnhandledError"
	Oct 06 14:54:39 ha-481559 kubelet[1985]: E1006 14:54:39.246030    1985 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-ha-481559" podUID="b4e1cca8a09d3789a7e0862428dfe0db"
	

                                                
                                                
-- /stdout --
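The recurring CRI-O error above, `Container creation error: cannot open sd-bus: No such file or directory`, is the proximate cause of the failure: every CreateContainer call for kube-apiserver, kube-controller-manager and kube-scheduler dies before kubeadm's 4m0s health checks can pass. That message usually means the OCI runtime is configured for the systemd cgroup manager while no systemd D-Bus socket is reachable, which is plausible under the docker driver, where the "node" is itself a container. A hedged way to check (the config key and socket paths are standard CRI-O/systemd, but the diagnosis itself is an assumption):

	# is crio configured for the systemd cgroup manager?
	minikube ssh -p ha-481559 "grep -rn cgroup_manager /etc/crio/ 2>/dev/null"
	# is a systemd/D-Bus socket actually present in the node?
	minikube ssh -p ha-481559 "ls -l /run/systemd/private /run/dbus/system_bus_socket 2>/dev/null || echo no bus socket"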
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-481559 -n ha-481559
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-481559 -n ha-481559: exit status 6 (294.72764ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1006 14:54:39.732959  693675 status.go:458] kubeconfig endpoint: get endpoint: "ha-481559" does not appear in /home/jenkins/minikube-integration/21701-626179/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "ha-481559" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (1.57s)
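The status output also warns that kubectl is pointing at a stale context; the fix the tool itself suggests, with the profile name added, would be:

	minikube update-context -p ha-481559
	kubectl config current-context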

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (49.96s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-481559 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-481559 node start m02 --alsologtostderr -v 5: exit status 85 (58.824101ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1006 14:54:39.791094  693790 out.go:360] Setting OutFile to fd 1 ...
	I1006 14:54:39.791401  693790 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 14:54:39.791411  693790 out.go:374] Setting ErrFile to fd 2...
	I1006 14:54:39.791418  693790 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 14:54:39.791632  693790 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-626179/.minikube/bin
	I1006 14:54:39.791909  693790 mustload.go:65] Loading cluster: ha-481559
	I1006 14:54:39.792255  693790 config.go:182] Loaded profile config "ha-481559": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 14:54:39.794088  693790 out.go:203] 
	W1006 14:54:39.795539  693790 out.go:285] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W1006 14:54:39.795555  693790 out.go:285] * 
	* 
	W1006 14:54:39.799795  693790 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1006 14:54:39.801181  693790 out.go:203] 

** /stderr **
ha_test.go:424: I1006 14:54:39.791094  693790 out.go:360] Setting OutFile to fd 1 ...
I1006 14:54:39.791401  693790 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1006 14:54:39.791411  693790 out.go:374] Setting ErrFile to fd 2...
I1006 14:54:39.791418  693790 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1006 14:54:39.791632  693790 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-626179/.minikube/bin
I1006 14:54:39.791909  693790 mustload.go:65] Loading cluster: ha-481559
I1006 14:54:39.792255  693790 config.go:182] Loaded profile config "ha-481559": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1006 14:54:39.794088  693790 out.go:203] 
W1006 14:54:39.795539  693790 out.go:285] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
W1006 14:54:39.795555  693790 out.go:285] * 
* 
W1006 14:54:39.799795  693790 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│    * Please also attach the following file to the GitHub issue:                             │
│    * - /tmp/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log                    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│    * Please also attach the following file to the GitHub issue:                             │
│    * - /tmp/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log                    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I1006 14:54:39.801181  693790 out.go:203] 

ha_test.go:425: secondary control-plane node start returned an error. args "out/minikube-linux-amd64 -p ha-481559 node start m02 --alsologtostderr -v 5": exit status 85
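
Exit status 85 maps to GUEST_NODE_RETRIEVE: the profile has no m02 node on record, so there is nothing for "node start" to act on. A hypothetical triage step (not part of the test) would be to enumerate the nodes the profile does know about before retrying:

	out/minikube-linux-amd64 -p ha-481559 node list
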
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-481559 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-481559 status --alsologtostderr -v 5: exit status 6 (288.102794ms)

-- stdout --
	ha-481559
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	I1006 14:54:39.851053  693801 out.go:360] Setting OutFile to fd 1 ...
	I1006 14:54:39.851295  693801 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 14:54:39.851303  693801 out.go:374] Setting ErrFile to fd 2...
	I1006 14:54:39.851308  693801 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 14:54:39.851509  693801 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-626179/.minikube/bin
	I1006 14:54:39.851682  693801 out.go:368] Setting JSON to false
	I1006 14:54:39.851711  693801 mustload.go:65] Loading cluster: ha-481559
	I1006 14:54:39.851747  693801 notify.go:220] Checking for updates...
	I1006 14:54:39.852032  693801 config.go:182] Loaded profile config "ha-481559": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 14:54:39.852045  693801 status.go:174] checking status of ha-481559 ...
	I1006 14:54:39.852493  693801 cli_runner.go:164] Run: docker container inspect ha-481559 --format={{.State.Status}}
	I1006 14:54:39.872221  693801 status.go:371] ha-481559 host status = "Running" (err=<nil>)
	I1006 14:54:39.872267  693801 host.go:66] Checking if "ha-481559" exists ...
	I1006 14:54:39.872562  693801 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-481559
	I1006 14:54:39.889345  693801 host.go:66] Checking if "ha-481559" exists ...
	I1006 14:54:39.889569  693801 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1006 14:54:39.889603  693801 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:54:39.906980  693801 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32883 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa Username:docker}
	I1006 14:54:40.005285  693801 ssh_runner.go:195] Run: systemctl --version
	I1006 14:54:40.011587  693801 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1006 14:54:40.024169  693801 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 14:54:40.079356  693801 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-06 14:54:40.070028121 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	E1006 14:54:40.079800  693801 status.go:458] kubeconfig endpoint: get endpoint: "ha-481559" does not appear in /home/jenkins/minikube-integration/21701-626179/kubeconfig
	I1006 14:54:40.079827  693801 api_server.go:166] Checking apiserver status ...
	I1006 14:54:40.079864  693801 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1006 14:54:40.089959  693801 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1006 14:54:40.089996  693801 status.go:463] ha-481559 apiserver status = Running (err=<nil>)
	I1006 14:54:40.090007  693801 status.go:176] ha-481559 status: &{Name:ha-481559 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Misconfigured Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1006 14:54:40.095330  629719 retry.go:31] will retry after 760.886285ms: exit status 6
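
A note on the cadence visible in the retry.go lines here and below: after each failed probe the harness sleeps for a growing, randomized interval before re-running status. A minimal Go sketch of that pattern follows; it is illustrative only, not minikube's actual retry package, and probe, base, max, and the attempt cap are all assumptions:

	package main

	import (
		"fmt"
		"math/rand"
		"time"
	)

	// retryExpo re-runs probe with exponentially growing, jittered delays,
	// mirroring the "will retry after ..." lines in this log.
	func retryExpo(probe func() error, base, max time.Duration, attempts int) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = probe(); err == nil {
				return nil
			}
			d := base << uint(i) // 1x, 2x, 4x, ... the base delay
			if d > max {
				d = max
			}
			// Jitter keeps concurrent pollers from waking in lockstep.
			sleep := base/2 + time.Duration(rand.Int63n(int64(d)))
			fmt.Printf("will retry after %v: %v\n", sleep, err)
			time.Sleep(sleep)
		}
		return fmt.Errorf("still failing after %d attempts: %w", attempts, err)
	}

	func main() {
		probe := func() error { return fmt.Errorf("exit status 6") } // stand-in for the status probe
		fmt.Println(retryExpo(probe, 500*time.Millisecond, 15*time.Second, 5))
	}
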
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-481559 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-481559 status --alsologtostderr -v 5: exit status 6 (289.709115ms)

-- stdout --
	ha-481559
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	I1006 14:54:40.902446  693920 out.go:360] Setting OutFile to fd 1 ...
	I1006 14:54:40.902561  693920 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 14:54:40.902570  693920 out.go:374] Setting ErrFile to fd 2...
	I1006 14:54:40.902574  693920 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 14:54:40.902784  693920 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-626179/.minikube/bin
	I1006 14:54:40.902969  693920 out.go:368] Setting JSON to false
	I1006 14:54:40.903001  693920 mustload.go:65] Loading cluster: ha-481559
	I1006 14:54:40.903127  693920 notify.go:220] Checking for updates...
	I1006 14:54:40.903398  693920 config.go:182] Loaded profile config "ha-481559": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 14:54:40.903419  693920 status.go:174] checking status of ha-481559 ...
	I1006 14:54:40.903857  693920 cli_runner.go:164] Run: docker container inspect ha-481559 --format={{.State.Status}}
	I1006 14:54:40.922427  693920 status.go:371] ha-481559 host status = "Running" (err=<nil>)
	I1006 14:54:40.922455  693920 host.go:66] Checking if "ha-481559" exists ...
	I1006 14:54:40.922733  693920 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-481559
	I1006 14:54:40.939910  693920 host.go:66] Checking if "ha-481559" exists ...
	I1006 14:54:40.940248  693920 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1006 14:54:40.940294  693920 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:54:40.957321  693920 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32883 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa Username:docker}
	I1006 14:54:41.056843  693920 ssh_runner.go:195] Run: systemctl --version
	I1006 14:54:41.063134  693920 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1006 14:54:41.075548  693920 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 14:54:41.131465  693920 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-06 14:54:41.121050768 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	E1006 14:54:41.131882  693920 status.go:458] kubeconfig endpoint: get endpoint: "ha-481559" does not appear in /home/jenkins/minikube-integration/21701-626179/kubeconfig
	I1006 14:54:41.131906  693920 api_server.go:166] Checking apiserver status ...
	I1006 14:54:41.131938  693920 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1006 14:54:41.142163  693920 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1006 14:54:41.142183  693920 status.go:463] ha-481559 apiserver status = Running (err=<nil>)
	I1006 14:54:41.142198  693920 status.go:176] ha-481559 status: &{Name:ha-481559 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Misconfigured Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1006 14:54:41.147108  629719 retry.go:31] will retry after 1.926932744s: exit status 6
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-481559 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-481559 status --alsologtostderr -v 5: exit status 6 (289.181003ms)

-- stdout --
	ha-481559
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	I1006 14:54:43.119318  694050 out.go:360] Setting OutFile to fd 1 ...
	I1006 14:54:43.119586  694050 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 14:54:43.119596  694050 out.go:374] Setting ErrFile to fd 2...
	I1006 14:54:43.119601  694050 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 14:54:43.119809  694050 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-626179/.minikube/bin
	I1006 14:54:43.119999  694050 out.go:368] Setting JSON to false
	I1006 14:54:43.120031  694050 mustload.go:65] Loading cluster: ha-481559
	I1006 14:54:43.120162  694050 notify.go:220] Checking for updates...
	I1006 14:54:43.120371  694050 config.go:182] Loaded profile config "ha-481559": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 14:54:43.120385  694050 status.go:174] checking status of ha-481559 ...
	I1006 14:54:43.120840  694050 cli_runner.go:164] Run: docker container inspect ha-481559 --format={{.State.Status}}
	I1006 14:54:43.138950  694050 status.go:371] ha-481559 host status = "Running" (err=<nil>)
	I1006 14:54:43.138979  694050 host.go:66] Checking if "ha-481559" exists ...
	I1006 14:54:43.139351  694050 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-481559
	I1006 14:54:43.156460  694050 host.go:66] Checking if "ha-481559" exists ...
	I1006 14:54:43.156756  694050 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1006 14:54:43.156827  694050 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:54:43.173671  694050 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32883 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa Username:docker}
	I1006 14:54:43.272541  694050 ssh_runner.go:195] Run: systemctl --version
	I1006 14:54:43.279302  694050 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1006 14:54:43.291552  694050 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 14:54:43.348652  694050 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-06 14:54:43.338846824 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	E1006 14:54:43.349079  694050 status.go:458] kubeconfig endpoint: get endpoint: "ha-481559" does not appear in /home/jenkins/minikube-integration/21701-626179/kubeconfig
	I1006 14:54:43.349104  694050 api_server.go:166] Checking apiserver status ...
	I1006 14:54:43.349146  694050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1006 14:54:43.359178  694050 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1006 14:54:43.359200  694050 status.go:463] ha-481559 apiserver status = Running (err=<nil>)
	I1006 14:54:43.359232  694050 status.go:176] ha-481559 status: &{Name:ha-481559 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Misconfigured Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1006 14:54:43.364724  629719 retry.go:31] will retry after 2.710691533s: exit status 6
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-481559 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-481559 status --alsologtostderr -v 5: exit status 6 (289.082847ms)

-- stdout --
	ha-481559
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	I1006 14:54:46.120386  694172 out.go:360] Setting OutFile to fd 1 ...
	I1006 14:54:46.120480  694172 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 14:54:46.120484  694172 out.go:374] Setting ErrFile to fd 2...
	I1006 14:54:46.120488  694172 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 14:54:46.120704  694172 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-626179/.minikube/bin
	I1006 14:54:46.120872  694172 out.go:368] Setting JSON to false
	I1006 14:54:46.120901  694172 mustload.go:65] Loading cluster: ha-481559
	I1006 14:54:46.120989  694172 notify.go:220] Checking for updates...
	I1006 14:54:46.121301  694172 config.go:182] Loaded profile config "ha-481559": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 14:54:46.121320  694172 status.go:174] checking status of ha-481559 ...
	I1006 14:54:46.121841  694172 cli_runner.go:164] Run: docker container inspect ha-481559 --format={{.State.Status}}
	I1006 14:54:46.142902  694172 status.go:371] ha-481559 host status = "Running" (err=<nil>)
	I1006 14:54:46.142949  694172 host.go:66] Checking if "ha-481559" exists ...
	I1006 14:54:46.143338  694172 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-481559
	I1006 14:54:46.159866  694172 host.go:66] Checking if "ha-481559" exists ...
	I1006 14:54:46.160092  694172 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1006 14:54:46.160145  694172 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:54:46.177059  694172 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32883 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa Username:docker}
	I1006 14:54:46.275380  694172 ssh_runner.go:195] Run: systemctl --version
	I1006 14:54:46.281716  694172 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1006 14:54:46.293580  694172 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 14:54:46.350677  694172 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-06 14:54:46.340674374 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	E1006 14:54:46.351170  694172 status.go:458] kubeconfig endpoint: get endpoint: "ha-481559" does not appear in /home/jenkins/minikube-integration/21701-626179/kubeconfig
	I1006 14:54:46.351219  694172 api_server.go:166] Checking apiserver status ...
	I1006 14:54:46.351262  694172 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1006 14:54:46.361714  694172 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1006 14:54:46.361740  694172 status.go:463] ha-481559 apiserver status = Running (err=<nil>)
	I1006 14:54:46.361753  694172 status.go:176] ha-481559 status: &{Name:ha-481559 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Misconfigured Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1006 14:54:46.366940  629719 retry.go:31] will retry after 2.260142722s: exit status 6
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-481559 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-481559 status --alsologtostderr -v 5: exit status 6 (291.435022ms)

-- stdout --
	ha-481559
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	I1006 14:54:48.672747  694301 out.go:360] Setting OutFile to fd 1 ...
	I1006 14:54:48.673004  694301 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 14:54:48.673012  694301 out.go:374] Setting ErrFile to fd 2...
	I1006 14:54:48.673017  694301 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 14:54:48.673199  694301 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-626179/.minikube/bin
	I1006 14:54:48.673387  694301 out.go:368] Setting JSON to false
	I1006 14:54:48.673419  694301 mustload.go:65] Loading cluster: ha-481559
	I1006 14:54:48.673454  694301 notify.go:220] Checking for updates...
	I1006 14:54:48.673793  694301 config.go:182] Loaded profile config "ha-481559": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 14:54:48.673811  694301 status.go:174] checking status of ha-481559 ...
	I1006 14:54:48.674382  694301 cli_runner.go:164] Run: docker container inspect ha-481559 --format={{.State.Status}}
	I1006 14:54:48.694917  694301 status.go:371] ha-481559 host status = "Running" (err=<nil>)
	I1006 14:54:48.694945  694301 host.go:66] Checking if "ha-481559" exists ...
	I1006 14:54:48.695288  694301 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-481559
	I1006 14:54:48.713873  694301 host.go:66] Checking if "ha-481559" exists ...
	I1006 14:54:48.714115  694301 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1006 14:54:48.714182  694301 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:54:48.731427  694301 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32883 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa Username:docker}
	I1006 14:54:48.829126  694301 ssh_runner.go:195] Run: systemctl --version
	I1006 14:54:48.835149  694301 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1006 14:54:48.847153  694301 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 14:54:48.905253  694301 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-06 14:54:48.895253915 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	E1006 14:54:48.905680  694301 status.go:458] kubeconfig endpoint: get endpoint: "ha-481559" does not appear in /home/jenkins/minikube-integration/21701-626179/kubeconfig
	I1006 14:54:48.905707  694301 api_server.go:166] Checking apiserver status ...
	I1006 14:54:48.905740  694301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1006 14:54:48.915797  694301 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1006 14:54:48.915816  694301 status.go:463] ha-481559 apiserver status = Running (err=<nil>)
	I1006 14:54:48.915826  694301 status.go:176] ha-481559 status: &{Name:ha-481559 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Misconfigured Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1006 14:54:48.921047  629719 retry.go:31] will retry after 2.571673541s: exit status 6
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-481559 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-481559 status --alsologtostderr -v 5: exit status 6 (292.233585ms)

-- stdout --
	ha-481559
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	I1006 14:54:51.537997  694414 out.go:360] Setting OutFile to fd 1 ...
	I1006 14:54:51.538150  694414 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 14:54:51.538164  694414 out.go:374] Setting ErrFile to fd 2...
	I1006 14:54:51.538171  694414 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 14:54:51.538405  694414 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-626179/.minikube/bin
	I1006 14:54:51.538584  694414 out.go:368] Setting JSON to false
	I1006 14:54:51.538613  694414 mustload.go:65] Loading cluster: ha-481559
	I1006 14:54:51.538732  694414 notify.go:220] Checking for updates...
	I1006 14:54:51.538946  694414 config.go:182] Loaded profile config "ha-481559": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 14:54:51.538963  694414 status.go:174] checking status of ha-481559 ...
	I1006 14:54:51.539403  694414 cli_runner.go:164] Run: docker container inspect ha-481559 --format={{.State.Status}}
	I1006 14:54:51.557800  694414 status.go:371] ha-481559 host status = "Running" (err=<nil>)
	I1006 14:54:51.557828  694414 host.go:66] Checking if "ha-481559" exists ...
	I1006 14:54:51.558250  694414 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-481559
	I1006 14:54:51.575417  694414 host.go:66] Checking if "ha-481559" exists ...
	I1006 14:54:51.575662  694414 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1006 14:54:51.575698  694414 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:54:51.593289  694414 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32883 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa Username:docker}
	I1006 14:54:51.692987  694414 ssh_runner.go:195] Run: systemctl --version
	I1006 14:54:51.699330  694414 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1006 14:54:51.712402  694414 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 14:54:51.769653  694414 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-06 14:54:51.759814004 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	E1006 14:54:51.770183  694414 status.go:458] kubeconfig endpoint: get endpoint: "ha-481559" does not appear in /home/jenkins/minikube-integration/21701-626179/kubeconfig
	I1006 14:54:51.770234  694414 api_server.go:166] Checking apiserver status ...
	I1006 14:54:51.770291  694414 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1006 14:54:51.780638  694414 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1006 14:54:51.780660  694414 status.go:463] ha-481559 apiserver status = Running (err=<nil>)
	I1006 14:54:51.780674  694414 status.go:176] ha-481559 status: &{Name:ha-481559 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Misconfigured Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1006 14:54:51.786120  629719 retry.go:31] will retry after 9.814832525s: exit status 6
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-481559 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-481559 status --alsologtostderr -v 5: exit status 6 (296.776435ms)

-- stdout --
	ha-481559
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	I1006 14:55:01.646994  694583 out.go:360] Setting OutFile to fd 1 ...
	I1006 14:55:01.647120  694583 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 14:55:01.647129  694583 out.go:374] Setting ErrFile to fd 2...
	I1006 14:55:01.647133  694583 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 14:55:01.647333  694583 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-626179/.minikube/bin
	I1006 14:55:01.647501  694583 out.go:368] Setting JSON to false
	I1006 14:55:01.647530  694583 mustload.go:65] Loading cluster: ha-481559
	I1006 14:55:01.647557  694583 notify.go:220] Checking for updates...
	I1006 14:55:01.647873  694583 config.go:182] Loaded profile config "ha-481559": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 14:55:01.647888  694583 status.go:174] checking status of ha-481559 ...
	I1006 14:55:01.648377  694583 cli_runner.go:164] Run: docker container inspect ha-481559 --format={{.State.Status}}
	I1006 14:55:01.667130  694583 status.go:371] ha-481559 host status = "Running" (err=<nil>)
	I1006 14:55:01.667183  694583 host.go:66] Checking if "ha-481559" exists ...
	I1006 14:55:01.667536  694583 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-481559
	I1006 14:55:01.686333  694583 host.go:66] Checking if "ha-481559" exists ...
	I1006 14:55:01.686631  694583 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1006 14:55:01.686689  694583 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:55:01.705072  694583 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32883 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa Username:docker}
	I1006 14:55:01.805538  694583 ssh_runner.go:195] Run: systemctl --version
	I1006 14:55:01.811792  694583 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1006 14:55:01.824330  694583 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 14:55:01.882522  694583 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-06 14:55:01.871829686 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	E1006 14:55:01.883000  694583 status.go:458] kubeconfig endpoint: get endpoint: "ha-481559" does not appear in /home/jenkins/minikube-integration/21701-626179/kubeconfig
	I1006 14:55:01.883027  694583 api_server.go:166] Checking apiserver status ...
	I1006 14:55:01.883064  694583 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1006 14:55:01.893262  694583 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1006 14:55:01.893282  694583 status.go:463] ha-481559 apiserver status = Running (err=<nil>)
	I1006 14:55:01.893293  694583 status.go:176] ha-481559 status: &{Name:ha-481559 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Misconfigured Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1006 14:55:01.898965  629719 retry.go:31] will retry after 14.536689316s: exit status 6
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-481559 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-481559 status --alsologtostderr -v 5: exit status 6 (299.481097ms)

-- stdout --
	ha-481559
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	I1006 14:55:16.486099  694772 out.go:360] Setting OutFile to fd 1 ...
	I1006 14:55:16.486413  694772 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 14:55:16.486424  694772 out.go:374] Setting ErrFile to fd 2...
	I1006 14:55:16.486430  694772 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 14:55:16.486624  694772 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-626179/.minikube/bin
	I1006 14:55:16.486840  694772 out.go:368] Setting JSON to false
	I1006 14:55:16.486880  694772 mustload.go:65] Loading cluster: ha-481559
	I1006 14:55:16.486974  694772 notify.go:220] Checking for updates...
	I1006 14:55:16.487314  694772 config.go:182] Loaded profile config "ha-481559": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 14:55:16.487334  694772 status.go:174] checking status of ha-481559 ...
	I1006 14:55:16.487792  694772 cli_runner.go:164] Run: docker container inspect ha-481559 --format={{.State.Status}}
	I1006 14:55:16.508878  694772 status.go:371] ha-481559 host status = "Running" (err=<nil>)
	I1006 14:55:16.508908  694772 host.go:66] Checking if "ha-481559" exists ...
	I1006 14:55:16.509171  694772 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-481559
	I1006 14:55:16.527161  694772 host.go:66] Checking if "ha-481559" exists ...
	I1006 14:55:16.527581  694772 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1006 14:55:16.527633  694772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:55:16.545449  694772 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32883 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa Username:docker}
	I1006 14:55:16.645817  694772 ssh_runner.go:195] Run: systemctl --version
	I1006 14:55:16.652329  694772 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1006 14:55:16.664729  694772 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 14:55:16.724705  694772 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-06 14:55:16.713822442 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	E1006 14:55:16.725242  694772 status.go:458] kubeconfig endpoint: get endpoint: "ha-481559" does not appear in /home/jenkins/minikube-integration/21701-626179/kubeconfig
	I1006 14:55:16.725274  694772 api_server.go:166] Checking apiserver status ...
	I1006 14:55:16.725320  694772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1006 14:55:16.735785  694772 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1006 14:55:16.735810  694772 status.go:463] ha-481559 apiserver status = Running (err=<nil>)
	I1006 14:55:16.735825  694772 status.go:176] ha-481559 status: &{Name:ha-481559 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Misconfigured Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1006 14:55:16.741196  629719 retry.go:31] will retry after 11.382964831s: exit status 6
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-481559 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-481559 status --alsologtostderr -v 5: exit status 6 (295.008695ms)

-- stdout --
	ha-481559
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	I1006 14:55:28.171125  694954 out.go:360] Setting OutFile to fd 1 ...
	I1006 14:55:28.171442  694954 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 14:55:28.171454  694954 out.go:374] Setting ErrFile to fd 2...
	I1006 14:55:28.171459  694954 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 14:55:28.171677  694954 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-626179/.minikube/bin
	I1006 14:55:28.171854  694954 out.go:368] Setting JSON to false
	I1006 14:55:28.171889  694954 mustload.go:65] Loading cluster: ha-481559
	I1006 14:55:28.172025  694954 notify.go:220] Checking for updates...
	I1006 14:55:28.172440  694954 config.go:182] Loaded profile config "ha-481559": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 14:55:28.172465  694954 status.go:174] checking status of ha-481559 ...
	I1006 14:55:28.173039  694954 cli_runner.go:164] Run: docker container inspect ha-481559 --format={{.State.Status}}
	I1006 14:55:28.191581  694954 status.go:371] ha-481559 host status = "Running" (err=<nil>)
	I1006 14:55:28.191616  694954 host.go:66] Checking if "ha-481559" exists ...
	I1006 14:55:28.191907  694954 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-481559
	I1006 14:55:28.209369  694954 host.go:66] Checking if "ha-481559" exists ...
	I1006 14:55:28.209633  694954 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1006 14:55:28.209684  694954 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:55:28.227064  694954 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32883 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa Username:docker}
	I1006 14:55:28.326671  694954 ssh_runner.go:195] Run: systemctl --version
	I1006 14:55:28.333097  694954 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1006 14:55:28.345389  694954 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 14:55:28.406301  694954 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-06 14:55:28.395735737 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	E1006 14:55:28.406721  694954 status.go:458] kubeconfig endpoint: get endpoint: "ha-481559" does not appear in /home/jenkins/minikube-integration/21701-626179/kubeconfig
	I1006 14:55:28.406751  694954 api_server.go:166] Checking apiserver status ...
	I1006 14:55:28.406785  694954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1006 14:55:28.417063  694954 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1006 14:55:28.417084  694954 status.go:463] ha-481559 apiserver status = Running (err=<nil>)
	I1006 14:55:28.417095  694954 status.go:176] ha-481559 status: &{Name:ha-481559 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Misconfigured Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:434: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-481559 status --alsologtostderr -v 5" : exit status 6
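Both status invocations above fail the same way: status.go:458 reports that the "ha-481559" endpoint is missing from the kubeconfig, which is what flips the status to "Kubeconfig: Misconfigured" and yields exit status 6. A minimal sketch of the same kind of check, assuming k8s.io/client-go is available and using the kubeconfig path from the logs:

```go
package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path as it appears in the transcript above.
	path := "/home/jenkins/minikube-integration/21701-626179/kubeconfig"

	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, "load kubeconfig:", err)
		os.Exit(1)
	}

	// The status check fails when the profile has no cluster entry,
	// mirroring the status.go:458 error in the transcript.
	cluster, ok := cfg.Clusters["ha-481559"]
	if !ok {
		fmt.Println(`"ha-481559" does not appear in the kubeconfig`)
		return
	}
	fmt.Println("endpoint:", cluster.Server)
}
```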
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-481559
helpers_test.go:243: (dbg) docker inspect ha-481559:

-- stdout --
	[
	    {
	        "Id": "8b017d29b6b188c11217460aa328e959f3cceef4aaac68c0efbe9c3f356f27b0",
	        "Created": "2025-10-06T14:44:39.623616791Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 683567,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-06T14:44:39.660699919Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/8b017d29b6b188c11217460aa328e959f3cceef4aaac68c0efbe9c3f356f27b0/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8b017d29b6b188c11217460aa328e959f3cceef4aaac68c0efbe9c3f356f27b0/hostname",
	        "HostsPath": "/var/lib/docker/containers/8b017d29b6b188c11217460aa328e959f3cceef4aaac68c0efbe9c3f356f27b0/hosts",
	        "LogPath": "/var/lib/docker/containers/8b017d29b6b188c11217460aa328e959f3cceef4aaac68c0efbe9c3f356f27b0/8b017d29b6b188c11217460aa328e959f3cceef4aaac68c0efbe9c3f356f27b0-json.log",
	        "Name": "/ha-481559",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-481559:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-481559",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "8b017d29b6b188c11217460aa328e959f3cceef4aaac68c0efbe9c3f356f27b0",
	                "LowerDir": "/var/lib/docker/overlay2/764c4d13032cea1c981a1513641378a4e309e5bc01b5356c6fecba1c3e0e2311-init/diff:/var/lib/docker/overlay2/498c39ad2e273bbda04a4b230222b9767ea2da097b1fe98436168d26143cd080/diff",
	                "MergedDir": "/var/lib/docker/overlay2/764c4d13032cea1c981a1513641378a4e309e5bc01b5356c6fecba1c3e0e2311/merged",
	                "UpperDir": "/var/lib/docker/overlay2/764c4d13032cea1c981a1513641378a4e309e5bc01b5356c6fecba1c3e0e2311/diff",
	                "WorkDir": "/var/lib/docker/overlay2/764c4d13032cea1c981a1513641378a4e309e5bc01b5356c6fecba1c3e0e2311/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "ha-481559",
	                "Source": "/var/lib/docker/volumes/ha-481559/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-481559",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-481559",
	                "name.minikube.sigs.k8s.io": "ha-481559",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7effae92997970d320561b0b86c210815b18a55d65bd555e1bff50158ed38adc",
	            "SandboxKey": "/var/run/docker/netns/7effae929979",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32883"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32884"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32887"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32885"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32886"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-481559": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "42:f3:45:3f:5b:fc",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "be549c6a1ae4457d4629d9a7f86cde88021333ee0af8bb7a740b008115c43dde",
	                    "EndpointID": "b8540561692606ad815fcdb4502c1e36a16141413d3697f4cf48668502930e4c",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-481559",
	                        "8b017d29b6b1"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
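The Ports block in the inspect output above is where the suite resolves its forwarded host ports (SSH on 32883); the earlier cli_runner lines show the exact Go template it passes to docker inspect. A minimal sketch that shells out the same way (the helper name is illustrative; assumes docker is on PATH):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostPort queries docker inspect with the same Go template the
// test's cli_runner uses to resolve a container port mapping.
func hostPort(container, port string) (string, error) {
	tmpl := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports %q) 0).HostPort}}`, port)
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := hostPort("ha-481559", "22/tcp")
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println("ssh host port:", port) // "32883" for the container above
}
```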
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-481559 -n ha-481559
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-481559 -n ha-481559: exit status 6 (288.90179ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1006 14:55:28.714241  695075 status.go:458] kubeconfig endpoint: get endpoint: "ha-481559" does not appear in /home/jenkins/minikube-integration/21701-626179/kubeconfig

** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
helpers_test.go:252: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-481559 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                      ARGS                                                       │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-135520 image build -t localhost/my-image:functional-135520 testdata/build --alsologtostderr          │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │ 06 Oct 25 14:40 UTC │
	│ image   │ functional-135520 image ls                                                                                      │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │ 06 Oct 25 14:40 UTC │
	│ delete  │ -p functional-135520                                                                                            │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:44 UTC │ 06 Oct 25 14:44 UTC │
	│ start   │ ha-481559 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:44 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml                                                │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:52 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- rollout status deployment/busybox                                                          │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:52 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:52 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:52 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:52 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:53 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:53 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:53 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:53 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:53 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:53 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:54 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:54 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:54 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- exec  -- nslookup kubernetes.io                                                            │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:54 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- exec  -- nslookup kubernetes.default                                                       │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:54 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local                                     │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:54 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:54 UTC │                     │
	│ node    │ ha-481559 node add --alsologtostderr -v 5                                                                       │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:54 UTC │                     │
	│ node    │ ha-481559 node stop m02 --alsologtostderr -v 5                                                                  │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:54 UTC │                     │
	│ node    │ ha-481559 node start m02 --alsologtostderr -v 5                                                                 │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:54 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/06 14:44:34
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
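The entries below follow the klog layout documented on the line above. For sifting transcripts this long, a small parser helps; a sketch using only the standard library (field names are my own):

```go
package main

import (
	"fmt"
	"regexp"
)

// Matches the documented layout:
//   [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
var klogLine = regexp.MustCompile(
	`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([^:]+:\d+)\] (.*)$`)

func main() {
	line := "I1006 14:44:34.230587  682995 out.go:360] Setting OutFile to fd 1 ..."
	m := klogLine.FindStringSubmatch(line)
	if m == nil {
		fmt.Println("no match")
		return
	}
	fmt.Printf("severity=%s date=%s time=%s pid=%s src=%s msg=%q\n",
		m[1], m[2], m[3], m[4], m[5], m[6])
}
```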
	I1006 14:44:34.230587  682995 out.go:360] Setting OutFile to fd 1 ...
	I1006 14:44:34.230719  682995 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 14:44:34.230728  682995 out.go:374] Setting ErrFile to fd 2...
	I1006 14:44:34.230733  682995 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 14:44:34.230969  682995 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-626179/.minikube/bin
	I1006 14:44:34.231523  682995 out.go:368] Setting JSON to false
	I1006 14:44:34.232538  682995 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":19610,"bootTime":1759742264,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1006 14:44:34.232651  682995 start.go:140] virtualization: kvm guest
	I1006 14:44:34.235278  682995 out.go:179] * [ha-481559] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1006 14:44:34.236668  682995 out.go:179]   - MINIKUBE_LOCATION=21701
	I1006 14:44:34.236708  682995 notify.go:220] Checking for updates...
	I1006 14:44:34.239256  682995 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1006 14:44:34.240475  682995 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21701-626179/kubeconfig
	I1006 14:44:34.242249  682995 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21701-626179/.minikube
	I1006 14:44:34.243577  682995 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1006 14:44:34.244737  682995 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1006 14:44:34.246267  682995 driver.go:421] Setting default libvirt URI to qemu:///system
	I1006 14:44:34.271626  682995 docker.go:123] docker version: linux-28.5.0:Docker Engine - Community
	I1006 14:44:34.271783  682995 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 14:44:34.334697  682995 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-06 14:44:34.323928193 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1006 14:44:34.334819  682995 docker.go:318] overlay module found
	I1006 14:44:34.336770  682995 out.go:179] * Using the docker driver based on user configuration
	I1006 14:44:34.338109  682995 start.go:304] selected driver: docker
	I1006 14:44:34.338130  682995 start.go:924] validating driver "docker" against <nil>
	I1006 14:44:34.338144  682995 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1006 14:44:34.338750  682995 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 14:44:34.398314  682995 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-06 14:44:34.387376197 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1006 14:44:34.398587  682995 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1006 14:44:34.399080  682995 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1006 14:44:34.401095  682995 out.go:179] * Using Docker driver with root privileges
	I1006 14:44:34.402283  682995 cni.go:84] Creating CNI manager for ""
	I1006 14:44:34.402367  682995 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1006 14:44:34.402383  682995 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1006 14:44:34.402476  682995 start.go:348] cluster config:
	{Name:ha-481559 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-481559 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 14:44:34.403829  682995 out.go:179] * Starting "ha-481559" primary control-plane node in "ha-481559" cluster
	I1006 14:44:34.404899  682995 cache.go:123] Beginning downloading kic base image for docker with crio
	I1006 14:44:34.406166  682995 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1006 14:44:34.407227  682995 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1006 14:44:34.407272  682995 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21701-626179/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1006 14:44:34.407284  682995 cache.go:58] Caching tarball of preloaded images
	I1006 14:44:34.407376  682995 preload.go:233] Found /home/jenkins/minikube-integration/21701-626179/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1006 14:44:34.407382  682995 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1006 14:44:34.407387  682995 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1006 14:44:34.407757  682995 profile.go:143] Saving config to /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/config.json ...
	I1006 14:44:34.407793  682995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/config.json: {Name:mkefd90ec0b9eae63c82d60bab053cdf7b5d9b74 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:44:34.429193  682995 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1006 14:44:34.429233  682995 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1006 14:44:34.429254  682995 cache.go:232] Successfully downloaded all kic artifacts
	I1006 14:44:34.429296  682995 start.go:360] acquireMachinesLock for ha-481559: {Name:mk240cd185ab39e9e4d3fa7c476aea5736cb5b11 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1006 14:44:34.429397  682995 start.go:364] duration metric: took 84.055µs to acquireMachinesLock for "ha-481559"
	I1006 14:44:34.429421  682995 start.go:93] Provisioning new machine with config: &{Name:ha-481559 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-481559 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1006 14:44:34.429503  682995 start.go:125] createHost starting for "" (driver="docker")
	I1006 14:44:34.431456  682995 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1006 14:44:34.431692  682995 start.go:159] libmachine.API.Create for "ha-481559" (driver="docker")
	I1006 14:44:34.431725  682995 client.go:168] LocalClient.Create starting
	I1006 14:44:34.431791  682995 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem
	I1006 14:44:34.431825  682995 main.go:141] libmachine: Decoding PEM data...
	I1006 14:44:34.431843  682995 main.go:141] libmachine: Parsing certificate...
	I1006 14:44:34.431939  682995 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21701-626179/.minikube/certs/cert.pem
	I1006 14:44:34.431977  682995 main.go:141] libmachine: Decoding PEM data...
	I1006 14:44:34.431994  682995 main.go:141] libmachine: Parsing certificate...
	I1006 14:44:34.432416  682995 cli_runner.go:164] Run: docker network inspect ha-481559 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1006 14:44:34.449965  682995 cli_runner.go:211] docker network inspect ha-481559 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1006 14:44:34.450053  682995 network_create.go:284] running [docker network inspect ha-481559] to gather additional debugging logs...
	I1006 14:44:34.450071  682995 cli_runner.go:164] Run: docker network inspect ha-481559
	W1006 14:44:34.468682  682995 cli_runner.go:211] docker network inspect ha-481559 returned with exit code 1
	I1006 14:44:34.468713  682995 network_create.go:287] error running [docker network inspect ha-481559]: docker network inspect ha-481559: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-481559 not found
	I1006 14:44:34.468724  682995 network_create.go:289] output of [docker network inspect ha-481559]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-481559 not found
	
	** /stderr **
	I1006 14:44:34.468902  682995 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1006 14:44:34.488223  682995 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ca2540}
	I1006 14:44:34.488276  682995 network_create.go:124] attempt to create docker network ha-481559 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1006 14:44:34.488338  682995 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-481559 ha-481559
	I1006 14:44:34.548630  682995 network_create.go:108] docker network ha-481559 192.168.49.0/24 created
	I1006 14:44:34.548669  682995 kic.go:121] calculated static IP "192.168.49.2" for the "ha-481559" container
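network.go:206 above derives the gateway, client range, and broadcast address from the free /24 it picked. Those values can be reproduced with the standard library alone; a sketch for the 192.168.49.0/24 subnet chosen in this run:

```go
package main

import (
	"fmt"
	"net"
)

func main() {
	// Subnet selected in the log above.
	_, ipnet, err := net.ParseCIDR("192.168.49.0/24")
	if err != nil {
		panic(err)
	}

	base := ipnet.IP.To4()
	broadcast := make(net.IP, 4)
	for i := range base {
		broadcast[i] = base[i] | ^ipnet.Mask[i] // set all host bits
	}

	gateway := net.IPv4(base[0], base[1], base[2], base[3]+1)        // 192.168.49.1
	clientMin := net.IPv4(base[0], base[1], base[2], base[3]+2)      // 192.168.49.2, the static IP
	clientMax := net.IPv4(base[0], base[1], base[2], broadcast[3]-1) // 192.168.49.254

	fmt.Println("Gateway:", gateway, "ClientMin:", clientMin,
		"ClientMax:", clientMax, "Broadcast:", broadcast)
}
```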
	I1006 14:44:34.548729  682995 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1006 14:44:34.566959  682995 cli_runner.go:164] Run: docker volume create ha-481559 --label name.minikube.sigs.k8s.io=ha-481559 --label created_by.minikube.sigs.k8s.io=true
	I1006 14:44:34.586001  682995 oci.go:103] Successfully created a docker volume ha-481559
	I1006 14:44:34.586088  682995 cli_runner.go:164] Run: docker run --rm --name ha-481559-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-481559 --entrypoint /usr/bin/test -v ha-481559:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib
	I1006 14:44:34.994169  682995 oci.go:107] Successfully prepared a docker volume ha-481559
	I1006 14:44:34.994233  682995 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1006 14:44:34.994280  682995 kic.go:194] Starting extracting preloaded images to volume ...
	I1006 14:44:34.994349  682995 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21701-626179/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-481559:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir
	I1006 14:44:39.551248  682995 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21701-626179/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-481559:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir: (4.556814521s)
	I1006 14:44:39.551287  682995 kic.go:203] duration metric: took 4.557022471s to extract preloaded images to volume ...
	W1006 14:44:39.551374  682995 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1006 14:44:39.551406  682995 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1006 14:44:39.551451  682995 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1006 14:44:39.608040  682995 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-481559 --name ha-481559 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-481559 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-481559 --network ha-481559 --ip 192.168.49.2 --volume ha-481559:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d
	I1006 14:44:39.865946  682995 cli_runner.go:164] Run: docker container inspect ha-481559 --format={{.State.Running}}
	I1006 14:44:39.883061  682995 cli_runner.go:164] Run: docker container inspect ha-481559 --format={{.State.Status}}
	I1006 14:44:39.901066  682995 cli_runner.go:164] Run: docker exec ha-481559 stat /var/lib/dpkg/alternatives/iptables
	I1006 14:44:39.951869  682995 oci.go:144] the created container "ha-481559" has a running status.
	I1006 14:44:39.951908  682995 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa...
	I1006 14:44:40.176341  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1006 14:44:40.176392  682995 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1006 14:44:40.205643  682995 cli_runner.go:164] Run: docker container inspect ha-481559 --format={{.State.Status}}
	I1006 14:44:40.227924  682995 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1006 14:44:40.227948  682995 kic_runner.go:114] Args: [docker exec --privileged ha-481559 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1006 14:44:40.277808  682995 cli_runner.go:164] Run: docker container inspect ha-481559 --format={{.State.Status}}
	I1006 14:44:40.297063  682995 machine.go:93] provisionDockerMachine start ...
	I1006 14:44:40.297156  682995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:44:40.315828  682995 main.go:141] libmachine: Using SSH client type: native
	I1006 14:44:40.316109  682995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32883 <nil> <nil>}
	I1006 14:44:40.316124  682995 main.go:141] libmachine: About to run SSH command:
	hostname
	I1006 14:44:40.461735  682995 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-481559
	
	I1006 14:44:40.461771  682995 ubuntu.go:182] provisioning hostname "ha-481559"
	I1006 14:44:40.461843  682995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:44:40.481222  682995 main.go:141] libmachine: Using SSH client type: native
	I1006 14:44:40.481551  682995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32883 <nil> <nil>}
	I1006 14:44:40.481575  682995 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-481559 && echo "ha-481559" | sudo tee /etc/hostname
	I1006 14:44:40.636624  682995 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-481559
	
	I1006 14:44:40.636709  682995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:44:40.655017  682995 main.go:141] libmachine: Using SSH client type: native
	I1006 14:44:40.655283  682995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32883 <nil> <nil>}
	I1006 14:44:40.655302  682995 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-481559' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-481559/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-481559' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1006 14:44:40.801276  682995 main.go:141] libmachine: SSH cmd err, output: <nil>: 
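Everything in provisionDockerMachine runs over the forwarded SSH port (127.0.0.1:32883) as user docker with the generated key. A minimal stand-alone sketch of the same kind of session, assuming the golang.org/x/crypto/ssh package (this is not the helper minikube itself uses):

```go
package main

import (
	"bytes"
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Key and endpoint as reported by sshutil.go:53 above.
	key, err := os.ReadFile("/home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}

	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local test container
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:32883", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()

	var out bytes.Buffer
	session.Stdout = &out
	if err := session.Run("hostname"); err != nil { // same first command as the provisioner
		log.Fatal(err)
	}
	fmt.Print(out.String()) // "ha-481559" in the run above
}
```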
	I1006 14:44:40.801313  682995 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21701-626179/.minikube CaCertPath:/home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21701-626179/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21701-626179/.minikube}
	I1006 14:44:40.801332  682995 ubuntu.go:190] setting up certificates
	I1006 14:44:40.801344  682995 provision.go:84] configureAuth start
	I1006 14:44:40.801398  682995 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-481559
	I1006 14:44:40.819000  682995 provision.go:143] copyHostCerts
	I1006 14:44:40.819052  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21701-626179/.minikube/ca.pem
	I1006 14:44:40.819089  682995 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-626179/.minikube/ca.pem, removing ...
	I1006 14:44:40.819099  682995 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-626179/.minikube/ca.pem
	I1006 14:44:40.819169  682995 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21701-626179/.minikube/ca.pem (1082 bytes)
	I1006 14:44:40.819281  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21701-626179/.minikube/cert.pem
	I1006 14:44:40.819304  682995 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-626179/.minikube/cert.pem, removing ...
	I1006 14:44:40.819309  682995 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-626179/.minikube/cert.pem
	I1006 14:44:40.819338  682995 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21701-626179/.minikube/cert.pem (1123 bytes)
	I1006 14:44:40.819400  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21701-626179/.minikube/key.pem
	I1006 14:44:40.819416  682995 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-626179/.minikube/key.pem, removing ...
	I1006 14:44:40.819428  682995 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-626179/.minikube/key.pem
	I1006 14:44:40.819460  682995 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21701-626179/.minikube/key.pem (1679 bytes)
	I1006 14:44:40.819525  682995 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21701-626179/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca-key.pem org=jenkins.ha-481559 san=[127.0.0.1 192.168.49.2 ha-481559 localhost minikube]
	I1006 14:44:40.896257  682995 provision.go:177] copyRemoteCerts
	I1006 14:44:40.896328  682995 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1006 14:44:40.896370  682995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:44:40.914092  682995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32883 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa Username:docker}
	I1006 14:44:41.016898  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1006 14:44:41.016969  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1006 14:44:41.037131  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1006 14:44:41.037215  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1006 14:44:41.055180  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1006 14:44:41.055258  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1006 14:44:41.073045  682995 provision.go:87] duration metric: took 271.684433ms to configureAuth
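	The server certificate generated at provision.go:117 above is signed for the SANs [127.0.0.1 192.168.49.2 ha-481559 localhost minikube]. A minimal hedged sketch to confirm that by hand against the same file on the CI host (not part of the run itself):
	  openssl x509 -noout -text \
	    -in /home/jenkins/minikube-integration/21701-626179/.minikube/machines/server.pem \
	    | grep -A1 'Subject Alternative Name'
	  # expected to list: ha-481559, localhost, minikube, 127.0.0.1, 192.168.49.2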
	I1006 14:44:41.073074  682995 ubuntu.go:206] setting minikube options for container-runtime
	I1006 14:44:41.073312  682995 config.go:182] Loaded profile config "ha-481559": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 14:44:41.073456  682995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:44:41.092548  682995 main.go:141] libmachine: Using SSH client type: native
	I1006 14:44:41.092838  682995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32883 <nil> <nil>}
	I1006 14:44:41.092869  682995 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1006 14:44:41.356221  682995 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1006 14:44:41.356247  682995 machine.go:96] duration metric: took 1.059160507s to provisionDockerMachine
	I1006 14:44:41.356259  682995 client.go:171] duration metric: took 6.924524382s to LocalClient.Create
	I1006 14:44:41.356282  682995 start.go:167] duration metric: took 6.924591304s to libmachine.API.Create "ha-481559"
	I1006 14:44:41.356295  682995 start.go:293] postStartSetup for "ha-481559" (driver="docker")
	I1006 14:44:41.356322  682995 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1006 14:44:41.356396  682995 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1006 14:44:41.356453  682995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:44:41.374424  682995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32883 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa Username:docker}
	I1006 14:44:41.479545  682995 ssh_runner.go:195] Run: cat /etc/os-release
	I1006 14:44:41.483318  682995 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1006 14:44:41.483345  682995 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1006 14:44:41.483356  682995 filesync.go:126] Scanning /home/jenkins/minikube-integration/21701-626179/.minikube/addons for local assets ...
	I1006 14:44:41.483402  682995 filesync.go:126] Scanning /home/jenkins/minikube-integration/21701-626179/.minikube/files for local assets ...
	I1006 14:44:41.483499  682995 filesync.go:149] local asset: /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem -> 6297192.pem in /etc/ssl/certs
	I1006 14:44:41.483510  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem -> /etc/ssl/certs/6297192.pem
	I1006 14:44:41.483603  682995 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1006 14:44:41.491409  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem --> /etc/ssl/certs/6297192.pem (1708 bytes)
	I1006 14:44:41.511609  682995 start.go:296] duration metric: took 155.29938ms for postStartSetup
	I1006 14:44:41.511914  682995 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-481559
	I1006 14:44:41.529867  682995 profile.go:143] Saving config to /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/config.json ...
	I1006 14:44:41.530158  682995 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1006 14:44:41.530223  682995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:44:41.547995  682995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32883 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa Username:docker}
	I1006 14:44:41.647810  682995 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1006 14:44:41.652637  682995 start.go:128] duration metric: took 7.223117194s to createHost
	I1006 14:44:41.652662  682995 start.go:83] releasing machines lock for "ha-481559", held for 7.223254897s
	I1006 14:44:41.652730  682995 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-481559
	I1006 14:44:41.670486  682995 ssh_runner.go:195] Run: cat /version.json
	I1006 14:44:41.670511  682995 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1006 14:44:41.670555  682995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:44:41.670581  682995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:44:41.689278  682995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32883 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa Username:docker}
	I1006 14:44:41.689801  682995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32883 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa Username:docker}
	I1006 14:44:41.845142  682995 ssh_runner.go:195] Run: systemctl --version
	I1006 14:44:41.852333  682995 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1006 14:44:41.886799  682995 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1006 14:44:41.891575  682995 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1006 14:44:41.891645  682995 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1006 14:44:41.918020  682995 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1006 14:44:41.918049  682995 start.go:495] detecting cgroup driver to use...
	I1006 14:44:41.918088  682995 detect.go:190] detected "systemd" cgroup driver on host os
	I1006 14:44:41.918148  682995 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1006 14:44:41.934827  682995 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1006 14:44:41.946573  682995 docker.go:218] disabling cri-docker service (if available) ...
	I1006 14:44:41.946626  682995 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1006 14:44:41.961811  682995 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1006 14:44:41.978333  682995 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1006 14:44:42.056893  682995 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1006 14:44:42.140645  682995 docker.go:234] disabling docker service ...
	I1006 14:44:42.140713  682995 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1006 14:44:42.159372  682995 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1006 14:44:42.171857  682995 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1006 14:44:42.255908  682995 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1006 14:44:42.340081  682995 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1006 14:44:42.352916  682995 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1006 14:44:42.367142  682995 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1006 14:44:42.367215  682995 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:44:42.377866  682995 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1006 14:44:42.377939  682995 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:44:42.387157  682995 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:44:42.395944  682995 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:44:42.404768  682995 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1006 14:44:42.412712  682995 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:44:42.420910  682995 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:44:42.434108  682995 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:44:42.442895  682995 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1006 14:44:42.450289  682995 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1006 14:44:42.457667  682995 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 14:44:42.535385  682995 ssh_runner.go:195] Run: sudo systemctl restart crio
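	Taken together, the Run: lines above amount to the CRI-O tailoring minikube applies over SSH: pin the pause image, switch to the systemd cgroup manager, pin conmon's cgroup, open unprivileged low ports, and enable IP forwarding. A consolidated sketch, with the commands copied from this log and assuming the same /etc/crio/crio.conf.d/02-crio.conf layout:
	  sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf
	  sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf
	  sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
	  sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
	  sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"   # forwarding needed for pod networking
	  sudo systemctl daemon-reload && sudo systemctl restart crio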
	I1006 14:44:42.643348  682995 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1006 14:44:42.643424  682995 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1006 14:44:42.647404  682995 start.go:563] Will wait 60s for crictl version
	I1006 14:44:42.647467  682995 ssh_runner.go:195] Run: which crictl
	I1006 14:44:42.651000  682995 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1006 14:44:42.675962  682995 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1006 14:44:42.676044  682995 ssh_runner.go:195] Run: crio --version
	I1006 14:44:42.705541  682995 ssh_runner.go:195] Run: crio --version
	I1006 14:44:42.736773  682995 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1006 14:44:42.738090  682995 cli_runner.go:164] Run: docker network inspect ha-481559 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1006 14:44:42.754892  682995 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1006 14:44:42.759274  682995 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1006 14:44:42.770415  682995 kubeadm.go:883] updating cluster {Name:ha-481559 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-481559 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1006 14:44:42.770534  682995 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1006 14:44:42.770581  682995 ssh_runner.go:195] Run: sudo crictl images --output json
	I1006 14:44:42.805187  682995 crio.go:514] all images are preloaded for cri-o runtime.
	I1006 14:44:42.805221  682995 crio.go:433] Images already preloaded, skipping extraction
	I1006 14:44:42.805274  682995 ssh_runner.go:195] Run: sudo crictl images --output json
	I1006 14:44:42.831096  682995 crio.go:514] all images are preloaded for cri-o runtime.
	I1006 14:44:42.831123  682995 cache_images.go:85] Images are preloaded, skipping loading
	I1006 14:44:42.831132  682995 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1006 14:44:42.831244  682995 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-481559 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-481559 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1006 14:44:42.831321  682995 ssh_runner.go:195] Run: crio config
	I1006 14:44:42.877768  682995 cni.go:84] Creating CNI manager for ""
	I1006 14:44:42.877790  682995 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1006 14:44:42.877819  682995 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1006 14:44:42.877840  682995 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-481559 NodeName:ha-481559 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1006 14:44:42.877966  682995 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-481559"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
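	One way to sanity-check the rendered config before it is handed to kubeadm init (a hedged sketch; kubeadm config validate exists in kubeadm v1.26 and later, and /var/tmp/minikube/kubeadm.yaml is where the cp at 14:44:44 below places the file):
	  sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml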
	
	I1006 14:44:42.877993  682995 kube-vip.go:115] generating kube-vip config ...
	I1006 14:44:42.878035  682995 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1006 14:44:42.890886  682995 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
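	Because lsmod found no ip_vs modules inside the docker-driver node, the kube-vip config generated below skips IPVS-based control-plane load-balancing and relies on the ARP-advertised VIP (vip_arp: "true" in the manifest). On a host whose kernel actually ships the modules, the same probe would pass after loading them; a hedged sketch (the module names are the usual ipvs set, availability depends on the kernel build):
	  sudo modprobe -a ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh
	  sudo sh -c "lsmod | grep ip_vs"   # the same check minikube ran above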
	I1006 14:44:42.890995  682995 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
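	This manifest is copied to /etc/kubernetes/manifests/kube-vip.yaml further down, so the kubelet runs kube-vip as a static pod that claims 192.168.49.254 on eth0. Once the cluster is reachable, a quick hedged check (the mirror-pod name appends the node name, assumed here to be ha-481559):
	  kubectl -n kube-system get pod kube-vip-ha-481559 -o wide
	  ip addr show dev eth0 | grep 192.168.49.254   # the VIP should be bound on the current leader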
	I1006 14:44:42.891046  682995 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1006 14:44:42.899063  682995 binaries.go:44] Found k8s binaries, skipping transfer
	I1006 14:44:42.899132  682995 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1006 14:44:42.906926  682995 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1006 14:44:42.919358  682995 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1006 14:44:42.934141  682995 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1006 14:44:42.945961  682995 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I1006 14:44:42.959489  682995 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1006 14:44:42.962953  682995 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1006 14:44:42.972760  682995 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 14:44:43.053996  682995 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1006 14:44:43.077665  682995 certs.go:69] Setting up /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559 for IP: 192.168.49.2
	I1006 14:44:43.077692  682995 certs.go:195] generating shared ca certs ...
	I1006 14:44:43.077714  682995 certs.go:227] acquiring lock for ca certs: {Name:mka0cc25cb6a953e937aa825fc55167759271aaf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:44:43.077856  682995 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21701-626179/.minikube/ca.key
	I1006 14:44:43.077899  682995 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21701-626179/.minikube/proxy-client-ca.key
	I1006 14:44:43.077909  682995 certs.go:257] generating profile certs ...
	I1006 14:44:43.077963  682995 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/client.key
	I1006 14:44:43.077983  682995 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/client.crt with IP's: []
	I1006 14:44:43.259387  682995 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/client.crt ...
	I1006 14:44:43.259418  682995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/client.crt: {Name:mk058803c7a7f0f2aa3fb547a3aafbba9518c3f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:44:43.259607  682995 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/client.key ...
	I1006 14:44:43.259619  682995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/client.key: {Name:mk0ae3492597f7c1edf0d7262770452fa244a40b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:44:43.265151  682995 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.key.6031b710
	I1006 14:44:43.265175  682995 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.crt.6031b710 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I1006 14:44:43.807062  682995 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.crt.6031b710 ...
	I1006 14:44:43.807095  682995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.crt.6031b710: {Name:mk30dd14f07a4b732bb60853cc2fd5f84f73e2f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:44:43.807283  682995 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.key.6031b710 ...
	I1006 14:44:43.807298  682995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.key.6031b710: {Name:mkf3f5fbdf7957143c03cb611320a2e02acb94c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:44:43.807374  682995 certs.go:382] copying /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.crt.6031b710 -> /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.crt
	I1006 14:44:43.807489  682995 certs.go:386] copying /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.key.6031b710 -> /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.key
	I1006 14:44:43.807558  682995 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.key
	I1006 14:44:43.807574  682995 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.crt with IP's: []
	I1006 14:44:43.994115  682995 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.crt ...
	I1006 14:44:43.994149  682995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.crt: {Name:mk715c6902e25626016d7eb8fdb7b52f0fdce895 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:44:43.994338  682995 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.key ...
	I1006 14:44:43.994350  682995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.key: {Name:mka438ddf42b96ca34511dda1ce60f08f1d48b59 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:44:43.994429  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1006 14:44:43.994449  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1006 14:44:43.994460  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1006 14:44:43.994470  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1006 14:44:43.994480  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1006 14:44:43.994490  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1006 14:44:43.994510  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1006 14:44:43.994522  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1006 14:44:43.994570  682995 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/629719.pem (1338 bytes)
	W1006 14:44:43.994617  682995 certs.go:480] ignoring /home/jenkins/minikube-integration/21701-626179/.minikube/certs/629719_empty.pem, impossibly tiny 0 bytes
	I1006 14:44:43.994630  682995 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca-key.pem (1675 bytes)
	I1006 14:44:43.994653  682995 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem (1082 bytes)
	I1006 14:44:43.994674  682995 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/cert.pem (1123 bytes)
	I1006 14:44:43.994701  682995 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/key.pem (1679 bytes)
	I1006 14:44:43.994739  682995 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem (1708 bytes)
	I1006 14:44:43.994772  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem -> /usr/share/ca-certificates/6297192.pem
	I1006 14:44:43.994786  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1006 14:44:43.994798  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/629719.pem -> /usr/share/ca-certificates/629719.pem
	I1006 14:44:43.995423  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1006 14:44:44.014422  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1006 14:44:44.032422  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1006 14:44:44.050727  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1006 14:44:44.068490  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1006 14:44:44.085540  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1006 14:44:44.102941  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1006 14:44:44.121043  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1006 14:44:44.139583  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem --> /usr/share/ca-certificates/6297192.pem (1708 bytes)
	I1006 14:44:44.159654  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1006 14:44:44.176939  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/certs/629719.pem --> /usr/share/ca-certificates/629719.pem (1338 bytes)
	I1006 14:44:44.194332  682995 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1006 14:44:44.207641  682995 ssh_runner.go:195] Run: openssl version
	I1006 14:44:44.214349  682995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6297192.pem && ln -fs /usr/share/ca-certificates/6297192.pem /etc/ssl/certs/6297192.pem"
	I1006 14:44:44.223426  682995 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6297192.pem
	I1006 14:44:44.227339  682995 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  6 14:13 /usr/share/ca-certificates/6297192.pem
	I1006 14:44:44.227401  682995 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6297192.pem
	I1006 14:44:44.261578  682995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6297192.pem /etc/ssl/certs/3ec20f2e.0"
	I1006 14:44:44.270472  682995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1006 14:44:44.279083  682995 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1006 14:44:44.282749  682995 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  6 13:56 /usr/share/ca-certificates/minikubeCA.pem
	I1006 14:44:44.282813  682995 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1006 14:44:44.316484  682995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1006 14:44:44.325228  682995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/629719.pem && ln -fs /usr/share/ca-certificates/629719.pem /etc/ssl/certs/629719.pem"
	I1006 14:44:44.334098  682995 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/629719.pem
	I1006 14:44:44.337988  682995 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  6 14:13 /usr/share/ca-certificates/629719.pem
	I1006 14:44:44.338051  682995 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/629719.pem
	I1006 14:44:44.371914  682995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/629719.pem /etc/ssl/certs/51391683.0"
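	The three rounds above all follow the same pattern: compute the OpenSSL subject hash of a CA file, then link /etc/ssl/certs/<hash>.0 at it so TLS libraries can locate the cert by hash; b5213941, for instance, is the hash logged for minikubeCA.pem. A generic hedged sketch of that pattern:
	  CERT=/usr/share/ca-certificates/minikubeCA.pem
	  HASH=$(openssl x509 -hash -noout -in "$CERT")   # prints e.g. b5213941, matching the link above
	  sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"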
	I1006 14:44:44.380847  682995 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1006 14:44:44.384643  682995 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1006 14:44:44.384694  682995 kubeadm.go:400] StartCluster: {Name:ha-481559 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-481559 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 14:44:44.384758  682995 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1006 14:44:44.384823  682995 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1006 14:44:44.413083  682995 cri.go:89] found id: ""
	I1006 14:44:44.413145  682995 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1006 14:44:44.421446  682995 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1006 14:44:44.429380  682995 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1006 14:44:44.429431  682995 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1006 14:44:44.437643  682995 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1006 14:44:44.437667  682995 kubeadm.go:157] found existing configuration files:
	
	I1006 14:44:44.437726  682995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1006 14:44:44.445948  682995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1006 14:44:44.446021  682995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1006 14:44:44.453451  682995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1006 14:44:44.460986  682995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1006 14:44:44.461064  682995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1006 14:44:44.468259  682995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1006 14:44:44.475830  682995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1006 14:44:44.475882  682995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1006 14:44:44.483080  682995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1006 14:44:44.490569  682995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1006 14:44:44.490632  682995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1006 14:44:44.498056  682995 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1006 14:44:44.560210  682995 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1006 14:44:44.618315  682995 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1006 14:48:49.762009  682995 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1006 14:48:49.762136  682995 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1006 14:48:49.765019  682995 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1006 14:48:49.765065  682995 kubeadm.go:318] [preflight] Running pre-flight checks
	I1006 14:48:49.765142  682995 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1006 14:48:49.765192  682995 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1006 14:48:49.765263  682995 kubeadm.go:318] OS: Linux
	I1006 14:48:49.765329  682995 kubeadm.go:318] CGROUPS_CPU: enabled
	I1006 14:48:49.765384  682995 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1006 14:48:49.765424  682995 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1006 14:48:49.765465  682995 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1006 14:48:49.765507  682995 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1006 14:48:49.765557  682995 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1006 14:48:49.765644  682995 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1006 14:48:49.765713  682995 kubeadm.go:318] CGROUPS_IO: enabled
	I1006 14:48:49.765816  682995 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1006 14:48:49.765897  682995 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1006 14:48:49.765974  682995 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1006 14:48:49.766033  682995 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1006 14:48:49.768189  682995 out.go:252]   - Generating certificates and keys ...
	I1006 14:48:49.768304  682995 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1006 14:48:49.768391  682995 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1006 14:48:49.768495  682995 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1006 14:48:49.768546  682995 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1006 14:48:49.768600  682995 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1006 14:48:49.768641  682995 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1006 14:48:49.768684  682995 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1006 14:48:49.768778  682995 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [ha-481559 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1006 14:48:49.768847  682995 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1006 14:48:49.768982  682995 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [ha-481559 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1006 14:48:49.769042  682995 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1006 14:48:49.769108  682995 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1006 14:48:49.769166  682995 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1006 14:48:49.769263  682995 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1006 14:48:49.769339  682995 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1006 14:48:49.769416  682995 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1006 14:48:49.769489  682995 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1006 14:48:49.769549  682995 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1006 14:48:49.769601  682995 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1006 14:48:49.769671  682995 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1006 14:48:49.769753  682995 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1006 14:48:49.771489  682995 out.go:252]   - Booting up control plane ...
	I1006 14:48:49.771577  682995 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1006 14:48:49.771664  682995 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1006 14:48:49.771742  682995 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1006 14:48:49.771858  682995 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1006 14:48:49.771974  682995 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1006 14:48:49.772108  682995 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1006 14:48:49.772220  682995 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1006 14:48:49.772288  682995 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1006 14:48:49.772413  682995 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1006 14:48:49.772556  682995 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1006 14:48:49.772647  682995 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.501252368s
	I1006 14:48:49.772772  682995 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1006 14:48:49.772891  682995 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1006 14:48:49.772971  682995 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1006 14:48:49.773033  682995 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1006 14:48:49.773108  682995 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.001319326s
	I1006 14:48:49.773189  682995 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001358761s
	I1006 14:48:49.773304  682995 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.001281021s
	I1006 14:48:49.773319  682995 kubeadm.go:318] 
	I1006 14:48:49.773407  682995 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1006 14:48:49.773472  682995 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1006 14:48:49.773545  682995 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1006 14:48:49.773657  682995 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1006 14:48:49.773771  682995 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1006 14:48:49.773850  682995 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1006 14:48:49.773891  682995 kubeadm.go:318] 
	W1006 14:48:49.774048  682995 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ha-481559 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ha-481559 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.501252368s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.001319326s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001358761s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001281021s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
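At this point the first `kubeadm init` attempt has timed out, and minikube falls back to `kubeadm reset` followed by a retry below. As a minimal sketch for chasing the same symptoms by hand, assuming the profile name and endpoints quoted in this log and that curl is available in the node image, the three health URLs kubeadm polls can be probed from inside the node:

	# Probe the control-plane endpoints kubeadm waits on (URLs taken from the
	# log above; adjust the profile name if yours differs).
	minikube ssh -p ha-481559 -- curl -sk https://192.168.49.2:8443/livez
	minikube ssh -p ha-481559 -- curl -sk https://127.0.0.1:10259/livez      # kube-scheduler
	minikube ssh -p ha-481559 -- curl -sk https://127.0.0.1:10257/healthz    # kube-controller-manager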
	I1006 14:48:49.774147  682995 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1006 14:48:52.524900  682995 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.75072398s)
	I1006 14:48:52.524985  682995 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1006 14:48:52.538104  682995 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1006 14:48:52.538173  682995 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1006 14:48:52.546610  682995 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1006 14:48:52.546639  682995 kubeadm.go:157] found existing configuration files:
	
	I1006 14:48:52.546692  682995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1006 14:48:52.555271  682995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1006 14:48:52.555334  682995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1006 14:48:52.564502  682995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1006 14:48:52.572861  682995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1006 14:48:52.572925  682995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1006 14:48:52.580681  682995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1006 14:48:52.588574  682995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1006 14:48:52.588636  682995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1006 14:48:52.596314  682995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1006 14:48:52.604007  682995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1006 14:48:52.604073  682995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1006 14:48:52.611967  682995 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1006 14:48:52.650794  682995 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1006 14:48:52.650844  682995 kubeadm.go:318] [preflight] Running pre-flight checks
	I1006 14:48:52.671446  682995 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1006 14:48:52.671559  682995 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1006 14:48:52.671628  682995 kubeadm.go:318] OS: Linux
	I1006 14:48:52.671718  682995 kubeadm.go:318] CGROUPS_CPU: enabled
	I1006 14:48:52.671766  682995 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1006 14:48:52.671811  682995 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1006 14:48:52.671850  682995 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1006 14:48:52.671890  682995 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1006 14:48:52.671928  682995 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1006 14:48:52.671972  682995 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1006 14:48:52.672010  682995 kubeadm.go:318] CGROUPS_IO: enabled
	I1006 14:48:52.732758  682995 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1006 14:48:52.732876  682995 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1006 14:48:52.732979  682995 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1006 14:48:52.739914  682995 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1006 14:48:52.743428  682995 out.go:252]   - Generating certificates and keys ...
	I1006 14:48:52.743535  682995 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1006 14:48:52.743654  682995 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1006 14:48:52.743727  682995 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1006 14:48:52.743777  682995 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1006 14:48:52.743861  682995 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1006 14:48:52.743911  682995 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1006 14:48:52.743985  682995 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1006 14:48:52.744055  682995 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1006 14:48:52.744143  682995 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1006 14:48:52.744228  682995 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1006 14:48:52.744266  682995 kubeadm.go:318] [certs] Using the existing "sa" key
	I1006 14:48:52.744323  682995 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1006 14:48:53.107297  682995 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1006 14:48:53.300701  682995 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1006 14:48:53.503166  682995 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1006 14:48:53.664024  682995 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1006 14:48:53.725865  682995 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1006 14:48:53.726293  682995 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1006 14:48:53.728797  682995 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1006 14:48:53.730586  682995 out.go:252]   - Booting up control plane ...
	I1006 14:48:53.730720  682995 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1006 14:48:53.730830  682995 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1006 14:48:53.730903  682995 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1006 14:48:53.744534  682995 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1006 14:48:53.744672  682995 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1006 14:48:53.752267  682995 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1006 14:48:53.752422  682995 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1006 14:48:53.752505  682995 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1006 14:48:53.852049  682995 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1006 14:48:53.852226  682995 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1006 14:48:54.353729  682995 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 501.825241ms
	I1006 14:48:54.356542  682995 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1006 14:48:54.356619  682995 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1006 14:48:54.356695  682995 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1006 14:48:54.356819  682995 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1006 14:52:54.358331  682995 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.001082251s
	I1006 14:52:54.358653  682995 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001136686s
	I1006 14:52:54.358853  682995 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.001070627s
	I1006 14:52:54.358881  682995 kubeadm.go:318] 
	I1006 14:52:54.359059  682995 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1006 14:52:54.359298  682995 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1006 14:52:54.359539  682995 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1006 14:52:54.359760  682995 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1006 14:52:54.359952  682995 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1006 14:52:54.360116  682995 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1006 14:52:54.360148  682995 kubeadm.go:318] 
	I1006 14:52:54.363033  682995 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1006 14:52:54.363163  682995 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1006 14:52:54.363696  682995 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1006 14:52:54.363761  682995 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1006 14:52:54.363858  682995 kubeadm.go:402] duration metric: took 8m9.979166519s to StartCluster
	I1006 14:52:54.363946  682995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:52:54.364031  682995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:52:54.392579  682995 cri.go:89] found id: ""
	I1006 14:52:54.392622  682995 logs.go:282] 0 containers: []
	W1006 14:52:54.392631  682995 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:52:54.392638  682995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:52:54.392693  682995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:52:54.420188  682995 cri.go:89] found id: ""
	I1006 14:52:54.420226  682995 logs.go:282] 0 containers: []
	W1006 14:52:54.420237  682995 logs.go:284] No container was found matching "etcd"
	I1006 14:52:54.420245  682995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:52:54.420299  682995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:52:54.445694  682995 cri.go:89] found id: ""
	I1006 14:52:54.445723  682995 logs.go:282] 0 containers: []
	W1006 14:52:54.445733  682995 logs.go:284] No container was found matching "coredns"
	I1006 14:52:54.445740  682995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:52:54.445791  682995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:52:54.471923  682995 cri.go:89] found id: ""
	I1006 14:52:54.471954  682995 logs.go:282] 0 containers: []
	W1006 14:52:54.471962  682995 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:52:54.471971  682995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:52:54.472030  682995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:52:54.498805  682995 cri.go:89] found id: ""
	I1006 14:52:54.498836  682995 logs.go:282] 0 containers: []
	W1006 14:52:54.498848  682995 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:52:54.498857  682995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:52:54.498922  682995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:52:54.524613  682995 cri.go:89] found id: ""
	I1006 14:52:54.524638  682995 logs.go:282] 0 containers: []
	W1006 14:52:54.524646  682995 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:52:54.524652  682995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:52:54.524708  682995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:52:54.551140  682995 cri.go:89] found id: ""
	I1006 14:52:54.551170  682995 logs.go:282] 0 containers: []
	W1006 14:52:54.551181  682995 logs.go:284] No container was found matching "kindnet"
	I1006 14:52:54.551194  682995 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:52:54.551220  682995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:52:54.615573  682995 logs.go:123] Gathering logs for container status ...
	I1006 14:52:54.615607  682995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:52:54.645703  682995 logs.go:123] Gathering logs for kubelet ...
	I1006 14:52:54.645732  682995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:52:54.709506  682995 logs.go:123] Gathering logs for dmesg ...
	I1006 14:52:54.709543  682995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:52:54.722963  682995 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:52:54.722997  682995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:52:54.783016  682995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:52:54.774940    2611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 14:52:54.776283    2611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 14:52:54.777585    2611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 14:52:54.778053    2611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 14:52:54.779590    2611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:52:54.774940    2611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 14:52:54.776283    2611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 14:52:54.777585    2611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 14:52:54.778053    2611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 14:52:54.779590    2611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W1006 14:52:54.783054  682995 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.825241ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.001082251s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001136686s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001070627s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1006 14:52:54.783107  682995 out.go:285] * 
	W1006 14:52:54.783182  682995 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.825241ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.001082251s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001136686s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001070627s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1006 14:52:54.783200  682995 out.go:285] * 
	W1006 14:52:54.785658  682995 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1006 14:52:54.789273  682995 out.go:203] 
	W1006 14:52:54.790573  682995 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.825241ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.001082251s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001136686s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001070627s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1006 14:52:54.790604  682995 out.go:285] * 
	I1006 14:52:54.791821  682995 out.go:203] 
	
	
	==> CRI-O <==
	Oct 06 14:55:19 ha-481559 crio[777]: time="2025-10-06T14:55:19.244579545Z" level=info msg="createCtr: removing container 6fb0dd024cf82df3265ea9cef1d9b2d64c39822575c13da9cef22a96fb1f2bd2" id=69df7c26-0bf6-4cda-a435-c80016e75229 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:55:19 ha-481559 crio[777]: time="2025-10-06T14:55:19.244621312Z" level=info msg="createCtr: deleting container 6fb0dd024cf82df3265ea9cef1d9b2d64c39822575c13da9cef22a96fb1f2bd2 from storage" id=69df7c26-0bf6-4cda-a435-c80016e75229 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:55:19 ha-481559 crio[777]: time="2025-10-06T14:55:19.24668136Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-ha-481559_kube-system_5f3181798721fe8691d871f051785efc_0" id=69df7c26-0bf6-4cda-a435-c80016e75229 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:55:20 ha-481559 crio[777]: time="2025-10-06T14:55:20.223215431Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=3a183dea-aa77-4c6c-981d-529233ecd117 name=/runtime.v1.ImageService/ImageStatus
	Oct 06 14:55:20 ha-481559 crio[777]: time="2025-10-06T14:55:20.224145811Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=662f9f35-d285-480d-9868-7dcd62d15739 name=/runtime.v1.ImageService/ImageStatus
	Oct 06 14:55:20 ha-481559 crio[777]: time="2025-10-06T14:55:20.225117645Z" level=info msg="Creating container: kube-system/etcd-ha-481559/etcd" id=2915b120-42a8-414b-a2ad-d45f3daa81ce name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:55:20 ha-481559 crio[777]: time="2025-10-06T14:55:20.225368415Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 14:55:20 ha-481559 crio[777]: time="2025-10-06T14:55:20.228550709Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 14:55:20 ha-481559 crio[777]: time="2025-10-06T14:55:20.228953397Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 14:55:20 ha-481559 crio[777]: time="2025-10-06T14:55:20.247311131Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=2915b120-42a8-414b-a2ad-d45f3daa81ce name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:55:20 ha-481559 crio[777]: time="2025-10-06T14:55:20.248732341Z" level=info msg="createCtr: deleting container ID d5363c9de6dcae0be0af7cf75c888d7675f0b4f54a7825076382c8dc45623594 from idIndex" id=2915b120-42a8-414b-a2ad-d45f3daa81ce name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:55:20 ha-481559 crio[777]: time="2025-10-06T14:55:20.248778146Z" level=info msg="createCtr: removing container d5363c9de6dcae0be0af7cf75c888d7675f0b4f54a7825076382c8dc45623594" id=2915b120-42a8-414b-a2ad-d45f3daa81ce name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:55:20 ha-481559 crio[777]: time="2025-10-06T14:55:20.248815689Z" level=info msg="createCtr: deleting container d5363c9de6dcae0be0af7cf75c888d7675f0b4f54a7825076382c8dc45623594 from storage" id=2915b120-42a8-414b-a2ad-d45f3daa81ce name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:55:20 ha-481559 crio[777]: time="2025-10-06T14:55:20.250975685Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-ha-481559_kube-system_520c6060936b1c2aac479c99ed6c0355_0" id=2915b120-42a8-414b-a2ad-d45f3daa81ce name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:55:25 ha-481559 crio[777]: time="2025-10-06T14:55:25.222343942Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=d3faabe5-3e05-4a0d-a77d-cea625e0efde name=/runtime.v1.ImageService/ImageStatus
	Oct 06 14:55:25 ha-481559 crio[777]: time="2025-10-06T14:55:25.223228731Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=9e6a7c11-f6cd-4ee3-b516-a6197c64cf8b name=/runtime.v1.ImageService/ImageStatus
	Oct 06 14:55:25 ha-481559 crio[777]: time="2025-10-06T14:55:25.224164048Z" level=info msg="Creating container: kube-system/kube-scheduler-ha-481559/kube-scheduler" id=69d896dd-3369-47d6-b61b-7e6979286ec2 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:55:25 ha-481559 crio[777]: time="2025-10-06T14:55:25.224404475Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 14:55:25 ha-481559 crio[777]: time="2025-10-06T14:55:25.227669018Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 14:55:25 ha-481559 crio[777]: time="2025-10-06T14:55:25.228080126Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 14:55:25 ha-481559 crio[777]: time="2025-10-06T14:55:25.24352531Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=69d896dd-3369-47d6-b61b-7e6979286ec2 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:55:25 ha-481559 crio[777]: time="2025-10-06T14:55:25.244947586Z" level=info msg="createCtr: deleting container ID 648df8e460bd6d4294793563cfec4e688534a4ad87449c157fe9c0ac46d3b2b0 from idIndex" id=69d896dd-3369-47d6-b61b-7e6979286ec2 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:55:25 ha-481559 crio[777]: time="2025-10-06T14:55:25.244994491Z" level=info msg="createCtr: removing container 648df8e460bd6d4294793563cfec4e688534a4ad87449c157fe9c0ac46d3b2b0" id=69d896dd-3369-47d6-b61b-7e6979286ec2 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:55:25 ha-481559 crio[777]: time="2025-10-06T14:55:25.24503395Z" level=info msg="createCtr: deleting container 648df8e460bd6d4294793563cfec4e688534a4ad87449c157fe9c0ac46d3b2b0 from storage" id=69d896dd-3369-47d6-b61b-7e6979286ec2 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:55:25 ha-481559 crio[777]: time="2025-10-06T14:55:25.247459358Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-ha-481559_kube-system_cc93cb8d89afaa943672c70952b45174_0" id=69d896dd-3369-47d6-b61b-7e6979286ec2 name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:55:29.317546    4650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 14:55:29.318163    4650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 14:55:29.319827    4650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 14:55:29.320294    4650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 14:55:29.321802    4650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	
	
	==> kernel <==
	 14:55:29 up  5:37,  0 user,  load average: 0.44, 0.18, 0.18
	Linux ha-481559 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 06 14:55:19 ha-481559 kubelet[1985]:         container kube-controller-manager start failed in pod kube-controller-manager-ha-481559_kube-system(5f3181798721fe8691d871f051785efc): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 06 14:55:19 ha-481559 kubelet[1985]:  > logger="UnhandledError"
	Oct 06 14:55:19 ha-481559 kubelet[1985]: E1006 14:55:19.247170    1985 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-ha-481559" podUID="5f3181798721fe8691d871f051785efc"
	Oct 06 14:55:20 ha-481559 kubelet[1985]: E1006 14:55:20.222661    1985 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-481559\" not found" node="ha-481559"
	Oct 06 14:55:20 ha-481559 kubelet[1985]: E1006 14:55:20.251333    1985 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 06 14:55:20 ha-481559 kubelet[1985]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 06 14:55:20 ha-481559 kubelet[1985]:  > podSandboxID="a7ce34bebe17bc556bee492a72e0243ebe86fdfcd40a6e28aafa4e286d225bc6"
	Oct 06 14:55:20 ha-481559 kubelet[1985]: E1006 14:55:20.251447    1985 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 06 14:55:20 ha-481559 kubelet[1985]:         container etcd start failed in pod etcd-ha-481559_kube-system(520c6060936b1c2aac479c99ed6c0355): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 06 14:55:20 ha-481559 kubelet[1985]:  > logger="UnhandledError"
	Oct 06 14:55:20 ha-481559 kubelet[1985]: E1006 14:55:20.251493    1985 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-ha-481559" podUID="520c6060936b1c2aac479c99ed6c0355"
	Oct 06 14:55:24 ha-481559 kubelet[1985]: E1006 14:55:24.249120    1985 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-481559\" not found"
	Oct 06 14:55:24 ha-481559 kubelet[1985]: E1006 14:55:24.870054    1985 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-481559?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 06 14:55:25 ha-481559 kubelet[1985]: I1006 14:55:25.055604    1985 kubelet_node_status.go:75] "Attempting to register node" node="ha-481559"
	Oct 06 14:55:25 ha-481559 kubelet[1985]: E1006 14:55:25.056041    1985 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-481559"
	Oct 06 14:55:25 ha-481559 kubelet[1985]: E1006 14:55:25.221848    1985 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-481559\" not found" node="ha-481559"
	Oct 06 14:55:25 ha-481559 kubelet[1985]: E1006 14:55:25.247799    1985 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 06 14:55:25 ha-481559 kubelet[1985]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 06 14:55:25 ha-481559 kubelet[1985]:  > podSandboxID="28815a6c32deaa458111079bbac61f47b8e22f338f2282fab7d62077c8b07f1e"
	Oct 06 14:55:25 ha-481559 kubelet[1985]: E1006 14:55:25.247905    1985 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 06 14:55:25 ha-481559 kubelet[1985]:         container kube-scheduler start failed in pod kube-scheduler-ha-481559_kube-system(cc93cb8d89afaa943672c70952b45174): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 06 14:55:25 ha-481559 kubelet[1985]:  > logger="UnhandledError"
	Oct 06 14:55:25 ha-481559 kubelet[1985]: E1006 14:55:25.247936    1985 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-ha-481559" podUID="cc93cb8d89afaa943672c70952b45174"
	Oct 06 14:55:29 ha-481559 kubelet[1985]: E1006 14:55:29.045160    1985 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8443: connect: connection refused" event="&Event{ObjectMeta:{ha-481559.186bee56630f6256  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-481559,UID:ha-481559,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ha-481559 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ha-481559,},FirstTimestamp:2025-10-06 14:48:54.214861398 +0000 UTC m=+0.361990569,LastTimestamp:2025-10-06 14:48:54.214861398 +0000 UTC m=+0.361990569,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-481559,}"
	Oct 06 14:55:29 ha-481559 kubelet[1985]: E1006 14:55:29.079805    1985 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.49.2:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	

-- /stdout --
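The CRI-O and kubelet sections above show every static-pod container failing with the same "cannot open sd-bus: No such file or directory" error, which is why no kube-* containers ever appear and all the control-plane health checks time out. That message typically means the runtime is configured for the systemd cgroup manager but cannot reach a systemd D-Bus socket. A minimal sketch of one workaround to test, assuming that diagnosis and a CRI-O version that reads drop-ins from /etc/crio/crio.conf.d (run inside the node, e.g. after `minikube ssh -p ha-481559`):

	# Hypothetical drop-in switching CRI-O from the systemd cgroup manager to
	# cgroupfs; conmon_cgroup must be "pod" when cgroupfs is used.
	sudo tee /etc/crio/crio.conf.d/99-cgroupfs.conf <<'EOF'
	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	EOF
	sudo systemctl restart crio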
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-481559 -n ha-481559
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-481559 -n ha-481559: exit status 6 (292.402474ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1006 14:55:29.688496  695410 status.go:458] kubeconfig endpoint: get endpoint: "ha-481559" does not appear in /home/jenkins/minikube-integration/21701-626179/kubeconfig

** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "ha-481559" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (49.96s)
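The status check above fails for two stacked reasons: the apiserver is stopped, and the "ha-481559" endpoint is missing from the kubeconfig entirely, which is what triggers the stale-context warning. A small sketch of the repair the warning itself suggests, assuming the cluster can be brought back up first (with a stopped apiserver the refreshed context still will not connect):

	# Rewrite the kubeconfig entry for this profile, then confirm the context.
	out/minikube-linux-amd64 update-context -p ha-481559
	kubectl config current-context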

x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.62s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:305: expected profile "ha-481559" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-481559\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-481559\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d\",\"Memory\":3072,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.34.1\",\"ClusterName\":\"ha-481559\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.49.2\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"MountString\":\"\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"DisableCoreDNSLog\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
ha_test.go:309: expected profile "ha-481559" in json of 'profile list' to have "HAppy" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-481559\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-481559\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d\",\"Memory\":3072,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSShar
esRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.34.1\",\"ClusterName\":\"ha-481559\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.49.2\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\
"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"MountString\":\"\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"DisableCoreDNSLog\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-linux-amd64 profile list --
output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-481559
helpers_test.go:243: (dbg) docker inspect ha-481559:

-- stdout --
	[
	    {
	        "Id": "8b017d29b6b188c11217460aa328e959f3cceef4aaac68c0efbe9c3f356f27b0",
	        "Created": "2025-10-06T14:44:39.623616791Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 683567,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-06T14:44:39.660699919Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/8b017d29b6b188c11217460aa328e959f3cceef4aaac68c0efbe9c3f356f27b0/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8b017d29b6b188c11217460aa328e959f3cceef4aaac68c0efbe9c3f356f27b0/hostname",
	        "HostsPath": "/var/lib/docker/containers/8b017d29b6b188c11217460aa328e959f3cceef4aaac68c0efbe9c3f356f27b0/hosts",
	        "LogPath": "/var/lib/docker/containers/8b017d29b6b188c11217460aa328e959f3cceef4aaac68c0efbe9c3f356f27b0/8b017d29b6b188c11217460aa328e959f3cceef4aaac68c0efbe9c3f356f27b0-json.log",
	        "Name": "/ha-481559",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-481559:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-481559",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "8b017d29b6b188c11217460aa328e959f3cceef4aaac68c0efbe9c3f356f27b0",
	                "LowerDir": "/var/lib/docker/overlay2/764c4d13032cea1c981a1513641378a4e309e5bc01b5356c6fecba1c3e0e2311-init/diff:/var/lib/docker/overlay2/498c39ad2e273bbda04a4b230222b9767ea2da097b1fe98436168d26143cd080/diff",
	                "MergedDir": "/var/lib/docker/overlay2/764c4d13032cea1c981a1513641378a4e309e5bc01b5356c6fecba1c3e0e2311/merged",
	                "UpperDir": "/var/lib/docker/overlay2/764c4d13032cea1c981a1513641378a4e309e5bc01b5356c6fecba1c3e0e2311/diff",
	                "WorkDir": "/var/lib/docker/overlay2/764c4d13032cea1c981a1513641378a4e309e5bc01b5356c6fecba1c3e0e2311/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-481559",
	                "Source": "/var/lib/docker/volumes/ha-481559/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-481559",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-481559",
	                "name.minikube.sigs.k8s.io": "ha-481559",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7effae92997970d320561b0b86c210815b18a55d65bd555e1bff50158ed38adc",
	            "SandboxKey": "/var/run/docker/netns/7effae929979",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32883"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32884"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32887"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32885"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32886"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-481559": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "42:f3:45:3f:5b:fc",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "be549c6a1ae4457d4629d9a7f86cde88021333ee0af8bb7a740b008115c43dde",
	                    "EndpointID": "b8540561692606ad815fcdb4502c1e36a16141413d3697f4cf48668502930e4c",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-481559",
	                        "8b017d29b6b1"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
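The harness depends on the dynamically published host ports recorded under NetworkSettings.Ports above (for example 22/tcp mapped to 127.0.0.1:32883, later used for SSH). The same Go template that appears further down in the minikube logs can pull a single port without paging through the full JSON; a sketch against this container:

	# host port backing the container's SSH port (22/tcp)
	docker container inspect ha-481559 \
	  --format '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'
	# prints 32883 for the container inspected above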
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-481559 -n ha-481559
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-481559 -n ha-481559: exit status 6 (300.278526ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1006 14:55:30.329106  695662 status.go:458] kubeconfig endpoint: get endpoint: "ha-481559" does not appear in /home/jenkins/minikube-integration/21701-626179/kubeconfig

** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
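The exit status 6 stems from the stale kubeconfig reported above: the host container is Running, but the "ha-481559" endpoint is missing from the kubeconfig, so cluster status cannot be resolved. Outside the harness, the warning's own suggestion is the usual remedy; a sketch, assuming the same profile:

	# rewrite the kubeconfig entry for this profile, then re-check status
	out/minikube-linux-amd64 update-context -p ha-481559
	out/minikube-linux-amd64 status -p ha-481559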
helpers_test.go:252: <<< TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-481559 logs -n 25
E1006 14:55:30.512354  629719 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:260: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                      ARGS                                                       │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-135520 image build -t localhost/my-image:functional-135520 testdata/build --alsologtostderr          │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │ 06 Oct 25 14:40 UTC │
	│ image   │ functional-135520 image ls                                                                                      │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:40 UTC │ 06 Oct 25 14:40 UTC │
	│ delete  │ -p functional-135520                                                                                            │ functional-135520 │ jenkins │ v1.37.0 │ 06 Oct 25 14:44 UTC │ 06 Oct 25 14:44 UTC │
	│ start   │ ha-481559 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:44 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml                                                │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:52 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- rollout status deployment/busybox                                                          │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:52 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:52 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:52 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:52 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:53 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:53 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:53 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:53 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:53 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:53 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:54 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:54 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:54 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- exec  -- nslookup kubernetes.io                                                            │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:54 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- exec  -- nslookup kubernetes.default                                                       │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:54 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local                                     │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:54 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:54 UTC │                     │
	│ node    │ ha-481559 node add --alsologtostderr -v 5                                                                       │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:54 UTC │                     │
	│ node    │ ha-481559 node stop m02 --alsologtostderr -v 5                                                                  │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:54 UTC │                     │
	│ node    │ ha-481559 node start m02 --alsologtostderr -v 5                                                                 │ ha-481559         │ jenkins │ v1.37.0 │ 06 Oct 25 14:54 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/06 14:44:34
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1006 14:44:34.230587  682995 out.go:360] Setting OutFile to fd 1 ...
	I1006 14:44:34.230719  682995 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 14:44:34.230728  682995 out.go:374] Setting ErrFile to fd 2...
	I1006 14:44:34.230733  682995 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 14:44:34.230969  682995 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-626179/.minikube/bin
	I1006 14:44:34.231523  682995 out.go:368] Setting JSON to false
	I1006 14:44:34.232538  682995 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":19610,"bootTime":1759742264,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1006 14:44:34.232651  682995 start.go:140] virtualization: kvm guest
	I1006 14:44:34.235278  682995 out.go:179] * [ha-481559] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1006 14:44:34.236668  682995 out.go:179]   - MINIKUBE_LOCATION=21701
	I1006 14:44:34.236708  682995 notify.go:220] Checking for updates...
	I1006 14:44:34.239256  682995 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1006 14:44:34.240475  682995 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21701-626179/kubeconfig
	I1006 14:44:34.242249  682995 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21701-626179/.minikube
	I1006 14:44:34.243577  682995 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1006 14:44:34.244737  682995 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1006 14:44:34.246267  682995 driver.go:421] Setting default libvirt URI to qemu:///system
	I1006 14:44:34.271626  682995 docker.go:123] docker version: linux-28.5.0:Docker Engine - Community
	I1006 14:44:34.271783  682995 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 14:44:34.334697  682995 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-06 14:44:34.323928193 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1006 14:44:34.334819  682995 docker.go:318] overlay module found
	I1006 14:44:34.336770  682995 out.go:179] * Using the docker driver based on user configuration
	I1006 14:44:34.338109  682995 start.go:304] selected driver: docker
	I1006 14:44:34.338130  682995 start.go:924] validating driver "docker" against <nil>
	I1006 14:44:34.338144  682995 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1006 14:44:34.338750  682995 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 14:44:34.398314  682995 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-06 14:44:34.387376197 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1006 14:44:34.398587  682995 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1006 14:44:34.399080  682995 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1006 14:44:34.401095  682995 out.go:179] * Using Docker driver with root privileges
	I1006 14:44:34.402283  682995 cni.go:84] Creating CNI manager for ""
	I1006 14:44:34.402367  682995 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1006 14:44:34.402383  682995 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1006 14:44:34.402476  682995 start.go:348] cluster config:
	{Name:ha-481559 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-481559 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CR
ISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPaus
eInterval:1m0s}
	I1006 14:44:34.403829  682995 out.go:179] * Starting "ha-481559" primary control-plane node in "ha-481559" cluster
	I1006 14:44:34.404899  682995 cache.go:123] Beginning downloading kic base image for docker with crio
	I1006 14:44:34.406166  682995 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1006 14:44:34.407227  682995 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1006 14:44:34.407272  682995 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21701-626179/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1006 14:44:34.407284  682995 cache.go:58] Caching tarball of preloaded images
	I1006 14:44:34.407376  682995 preload.go:233] Found /home/jenkins/minikube-integration/21701-626179/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1006 14:44:34.407382  682995 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1006 14:44:34.407387  682995 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1006 14:44:34.407757  682995 profile.go:143] Saving config to /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/config.json ...
	I1006 14:44:34.407793  682995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/config.json: {Name:mkefd90ec0b9eae63c82d60bab053cdf7b5d9b74 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:44:34.429193  682995 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1006 14:44:34.429233  682995 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1006 14:44:34.429254  682995 cache.go:232] Successfully downloaded all kic artifacts
	I1006 14:44:34.429296  682995 start.go:360] acquireMachinesLock for ha-481559: {Name:mk240cd185ab39e9e4d3fa7c476aea5736cb5b11 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1006 14:44:34.429397  682995 start.go:364] duration metric: took 84.055µs to acquireMachinesLock for "ha-481559"
	I1006 14:44:34.429421  682995 start.go:93] Provisioning new machine with config: &{Name:ha-481559 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-481559 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMn
etClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1006 14:44:34.429503  682995 start.go:125] createHost starting for "" (driver="docker")
	I1006 14:44:34.431456  682995 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1006 14:44:34.431692  682995 start.go:159] libmachine.API.Create for "ha-481559" (driver="docker")
	I1006 14:44:34.431725  682995 client.go:168] LocalClient.Create starting
	I1006 14:44:34.431791  682995 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem
	I1006 14:44:34.431825  682995 main.go:141] libmachine: Decoding PEM data...
	I1006 14:44:34.431843  682995 main.go:141] libmachine: Parsing certificate...
	I1006 14:44:34.431939  682995 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21701-626179/.minikube/certs/cert.pem
	I1006 14:44:34.431977  682995 main.go:141] libmachine: Decoding PEM data...
	I1006 14:44:34.431994  682995 main.go:141] libmachine: Parsing certificate...
	I1006 14:44:34.432416  682995 cli_runner.go:164] Run: docker network inspect ha-481559 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1006 14:44:34.449965  682995 cli_runner.go:211] docker network inspect ha-481559 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1006 14:44:34.450053  682995 network_create.go:284] running [docker network inspect ha-481559] to gather additional debugging logs...
	I1006 14:44:34.450071  682995 cli_runner.go:164] Run: docker network inspect ha-481559
	W1006 14:44:34.468682  682995 cli_runner.go:211] docker network inspect ha-481559 returned with exit code 1
	I1006 14:44:34.468713  682995 network_create.go:287] error running [docker network inspect ha-481559]: docker network inspect ha-481559: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-481559 not found
	I1006 14:44:34.468724  682995 network_create.go:289] output of [docker network inspect ha-481559]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-481559 not found
	
	** /stderr **
	I1006 14:44:34.468902  682995 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1006 14:44:34.488223  682995 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ca2540}
	I1006 14:44:34.488276  682995 network_create.go:124] attempt to create docker network ha-481559 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1006 14:44:34.488338  682995 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-481559 ha-481559
	I1006 14:44:34.548630  682995 network_create.go:108] docker network ha-481559 192.168.49.0/24 created
	I1006 14:44:34.548669  682995 kic.go:121] calculated static IP "192.168.49.2" for the "ha-481559" container
	I1006 14:44:34.548729  682995 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1006 14:44:34.566959  682995 cli_runner.go:164] Run: docker volume create ha-481559 --label name.minikube.sigs.k8s.io=ha-481559 --label created_by.minikube.sigs.k8s.io=true
	I1006 14:44:34.586001  682995 oci.go:103] Successfully created a docker volume ha-481559
	I1006 14:44:34.586088  682995 cli_runner.go:164] Run: docker run --rm --name ha-481559-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-481559 --entrypoint /usr/bin/test -v ha-481559:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib
	I1006 14:44:34.994169  682995 oci.go:107] Successfully prepared a docker volume ha-481559
	I1006 14:44:34.994233  682995 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1006 14:44:34.994280  682995 kic.go:194] Starting extracting preloaded images to volume ...
	I1006 14:44:34.994349  682995 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21701-626179/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-481559:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir
	I1006 14:44:39.551248  682995 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21701-626179/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-481559:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir: (4.556814521s)
	I1006 14:44:39.551287  682995 kic.go:203] duration metric: took 4.557022471s to extract preloaded images to volume ...
	W1006 14:44:39.551374  682995 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1006 14:44:39.551406  682995 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1006 14:44:39.551451  682995 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1006 14:44:39.608040  682995 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-481559 --name ha-481559 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-481559 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-481559 --network ha-481559 --ip 192.168.49.2 --volume ha-481559:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d
	I1006 14:44:39.865946  682995 cli_runner.go:164] Run: docker container inspect ha-481559 --format={{.State.Running}}
	I1006 14:44:39.883061  682995 cli_runner.go:164] Run: docker container inspect ha-481559 --format={{.State.Status}}
	I1006 14:44:39.901066  682995 cli_runner.go:164] Run: docker exec ha-481559 stat /var/lib/dpkg/alternatives/iptables
	I1006 14:44:39.951869  682995 oci.go:144] the created container "ha-481559" has a running status.
	I1006 14:44:39.951908  682995 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa...
	I1006 14:44:40.176341  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1006 14:44:40.176392  682995 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1006 14:44:40.205643  682995 cli_runner.go:164] Run: docker container inspect ha-481559 --format={{.State.Status}}
	I1006 14:44:40.227924  682995 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1006 14:44:40.227948  682995 kic_runner.go:114] Args: [docker exec --privileged ha-481559 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1006 14:44:40.277808  682995 cli_runner.go:164] Run: docker container inspect ha-481559 --format={{.State.Status}}
	I1006 14:44:40.297063  682995 machine.go:93] provisionDockerMachine start ...
	I1006 14:44:40.297156  682995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:44:40.315828  682995 main.go:141] libmachine: Using SSH client type: native
	I1006 14:44:40.316109  682995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32883 <nil> <nil>}
	I1006 14:44:40.316124  682995 main.go:141] libmachine: About to run SSH command:
	hostname
	I1006 14:44:40.461735  682995 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-481559
	
	I1006 14:44:40.461771  682995 ubuntu.go:182] provisioning hostname "ha-481559"
	I1006 14:44:40.461843  682995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:44:40.481222  682995 main.go:141] libmachine: Using SSH client type: native
	I1006 14:44:40.481551  682995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32883 <nil> <nil>}
	I1006 14:44:40.481575  682995 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-481559 && echo "ha-481559" | sudo tee /etc/hostname
	I1006 14:44:40.636624  682995 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-481559
	
	I1006 14:44:40.636709  682995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:44:40.655017  682995 main.go:141] libmachine: Using SSH client type: native
	I1006 14:44:40.655283  682995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32883 <nil> <nil>}
	I1006 14:44:40.655302  682995 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-481559' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-481559/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-481559' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1006 14:44:40.801276  682995 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1006 14:44:40.801313  682995 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21701-626179/.minikube CaCertPath:/home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21701-626179/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21701-626179/.minikube}
	I1006 14:44:40.801332  682995 ubuntu.go:190] setting up certificates
	I1006 14:44:40.801344  682995 provision.go:84] configureAuth start
	I1006 14:44:40.801398  682995 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-481559
	I1006 14:44:40.819000  682995 provision.go:143] copyHostCerts
	I1006 14:44:40.819052  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21701-626179/.minikube/ca.pem
	I1006 14:44:40.819089  682995 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-626179/.minikube/ca.pem, removing ...
	I1006 14:44:40.819099  682995 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-626179/.minikube/ca.pem
	I1006 14:44:40.819169  682995 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21701-626179/.minikube/ca.pem (1082 bytes)
	I1006 14:44:40.819281  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21701-626179/.minikube/cert.pem
	I1006 14:44:40.819304  682995 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-626179/.minikube/cert.pem, removing ...
	I1006 14:44:40.819309  682995 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-626179/.minikube/cert.pem
	I1006 14:44:40.819338  682995 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21701-626179/.minikube/cert.pem (1123 bytes)
	I1006 14:44:40.819400  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21701-626179/.minikube/key.pem
	I1006 14:44:40.819416  682995 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-626179/.minikube/key.pem, removing ...
	I1006 14:44:40.819428  682995 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-626179/.minikube/key.pem
	I1006 14:44:40.819460  682995 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21701-626179/.minikube/key.pem (1679 bytes)
	I1006 14:44:40.819525  682995 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21701-626179/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca-key.pem org=jenkins.ha-481559 san=[127.0.0.1 192.168.49.2 ha-481559 localhost minikube]
	I1006 14:44:40.896257  682995 provision.go:177] copyRemoteCerts
	I1006 14:44:40.896328  682995 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1006 14:44:40.896370  682995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:44:40.914092  682995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32883 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa Username:docker}
	I1006 14:44:41.016898  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1006 14:44:41.016969  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1006 14:44:41.037131  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1006 14:44:41.037215  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1006 14:44:41.055180  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1006 14:44:41.055258  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1006 14:44:41.073045  682995 provision.go:87] duration metric: took 271.684433ms to configureAuth
	I1006 14:44:41.073074  682995 ubuntu.go:206] setting minikube options for container-runtime
	I1006 14:44:41.073312  682995 config.go:182] Loaded profile config "ha-481559": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 14:44:41.073456  682995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:44:41.092548  682995 main.go:141] libmachine: Using SSH client type: native
	I1006 14:44:41.092838  682995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32883 <nil> <nil>}
	I1006 14:44:41.092869  682995 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1006 14:44:41.356221  682995 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1006 14:44:41.356247  682995 machine.go:96] duration metric: took 1.059160507s to provisionDockerMachine
	I1006 14:44:41.356259  682995 client.go:171] duration metric: took 6.924524382s to LocalClient.Create
	I1006 14:44:41.356282  682995 start.go:167] duration metric: took 6.924591304s to libmachine.API.Create "ha-481559"
	I1006 14:44:41.356295  682995 start.go:293] postStartSetup for "ha-481559" (driver="docker")
	I1006 14:44:41.356322  682995 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1006 14:44:41.356396  682995 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1006 14:44:41.356453  682995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:44:41.374424  682995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32883 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa Username:docker}
	I1006 14:44:41.479545  682995 ssh_runner.go:195] Run: cat /etc/os-release
	I1006 14:44:41.483318  682995 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1006 14:44:41.483345  682995 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1006 14:44:41.483356  682995 filesync.go:126] Scanning /home/jenkins/minikube-integration/21701-626179/.minikube/addons for local assets ...
	I1006 14:44:41.483402  682995 filesync.go:126] Scanning /home/jenkins/minikube-integration/21701-626179/.minikube/files for local assets ...
	I1006 14:44:41.483499  682995 filesync.go:149] local asset: /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem -> 6297192.pem in /etc/ssl/certs
	I1006 14:44:41.483510  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem -> /etc/ssl/certs/6297192.pem
	I1006 14:44:41.483603  682995 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1006 14:44:41.491409  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem --> /etc/ssl/certs/6297192.pem (1708 bytes)
	I1006 14:44:41.511609  682995 start.go:296] duration metric: took 155.29938ms for postStartSetup
	I1006 14:44:41.511914  682995 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-481559
	I1006 14:44:41.529867  682995 profile.go:143] Saving config to /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/config.json ...
	I1006 14:44:41.530158  682995 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1006 14:44:41.530223  682995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:44:41.547995  682995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32883 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa Username:docker}
	I1006 14:44:41.647810  682995 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1006 14:44:41.652637  682995 start.go:128] duration metric: took 7.223117194s to createHost
	I1006 14:44:41.652662  682995 start.go:83] releasing machines lock for "ha-481559", held for 7.223254897s
	I1006 14:44:41.652730  682995 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-481559
	I1006 14:44:41.670486  682995 ssh_runner.go:195] Run: cat /version.json
	I1006 14:44:41.670511  682995 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1006 14:44:41.670555  682995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:44:41.670581  682995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:44:41.689278  682995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32883 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa Username:docker}
	I1006 14:44:41.689801  682995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32883 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa Username:docker}
	I1006 14:44:41.845142  682995 ssh_runner.go:195] Run: systemctl --version
	I1006 14:44:41.852333  682995 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1006 14:44:41.886799  682995 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1006 14:44:41.891575  682995 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1006 14:44:41.891645  682995 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1006 14:44:41.918020  682995 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1006 14:44:41.918049  682995 start.go:495] detecting cgroup driver to use...
	I1006 14:44:41.918088  682995 detect.go:190] detected "systemd" cgroup driver on host os
	I1006 14:44:41.918148  682995 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1006 14:44:41.934827  682995 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1006 14:44:41.946573  682995 docker.go:218] disabling cri-docker service (if available) ...
	I1006 14:44:41.946626  682995 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1006 14:44:41.961811  682995 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1006 14:44:41.978333  682995 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1006 14:44:42.056893  682995 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1006 14:44:42.140645  682995 docker.go:234] disabling docker service ...
	I1006 14:44:42.140713  682995 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1006 14:44:42.159372  682995 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1006 14:44:42.171857  682995 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1006 14:44:42.255908  682995 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1006 14:44:42.340081  682995 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1006 14:44:42.352916  682995 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1006 14:44:42.367142  682995 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1006 14:44:42.367215  682995 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:44:42.377866  682995 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1006 14:44:42.377939  682995 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:44:42.387157  682995 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:44:42.395944  682995 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:44:42.404768  682995 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1006 14:44:42.412712  682995 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:44:42.420910  682995 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:44:42.434108  682995 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
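	Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with settings along these lines (a reconstruction from the commands, not a dump of the file):
	
		pause_image = "registry.k8s.io/pause:3.10.1"
		cgroup_manager = "systemd"
		conmon_cgroup = "pod"
		default_sysctls = [
		  "net.ipv4.ip_unprivileged_port_start=0",
		]
	
	The unprivileged-port sysctl lets pods bind ports below 1024 without extra capabilities, and because /etc/crictl.yaml (written just above) pins runtime-endpoint to the CRI-O socket, a plain sudo crictl info can verify the runtime once it restarts.
	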
	I1006 14:44:42.442895  682995 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1006 14:44:42.450289  682995 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1006 14:44:42.457667  682995 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 14:44:42.535385  682995 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1006 14:44:42.643348  682995 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1006 14:44:42.643424  682995 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1006 14:44:42.647404  682995 start.go:563] Will wait 60s for crictl version
	I1006 14:44:42.647467  682995 ssh_runner.go:195] Run: which crictl
	I1006 14:44:42.651000  682995 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1006 14:44:42.675962  682995 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1006 14:44:42.676044  682995 ssh_runner.go:195] Run: crio --version
	I1006 14:44:42.705541  682995 ssh_runner.go:195] Run: crio --version
	I1006 14:44:42.736773  682995 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1006 14:44:42.738090  682995 cli_runner.go:164] Run: docker network inspect ha-481559 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1006 14:44:42.754892  682995 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1006 14:44:42.759274  682995 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
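	The { grep -v ...; echo ...; } > /tmp/h.$$ pattern is an idempotent upsert: any stale host.minikube.internal entry is stripped before the fresh mapping is appended, so repeated runs leave exactly one line. The same pattern recurs below for control-plane.minikube.internal. The net effect here:
	
		192.168.49.1	host.minikube.internal    # appended to /etc/hosts
	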
	I1006 14:44:42.770415  682995 kubeadm.go:883] updating cluster {Name:ha-481559 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-481559 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1006 14:44:42.770534  682995 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1006 14:44:42.770581  682995 ssh_runner.go:195] Run: sudo crictl images --output json
	I1006 14:44:42.805187  682995 crio.go:514] all images are preloaded for cri-o runtime.
	I1006 14:44:42.805221  682995 crio.go:433] Images already preloaded, skipping extraction
	I1006 14:44:42.805274  682995 ssh_runner.go:195] Run: sudo crictl images --output json
	I1006 14:44:42.831096  682995 crio.go:514] all images are preloaded for cri-o runtime.
	I1006 14:44:42.831123  682995 cache_images.go:85] Images are preloaded, skipping loading
	I1006 14:44:42.831132  682995 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1006 14:44:42.831244  682995 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-481559 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-481559 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
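	This rendered kubelet unit is what the scp memory --> ... lines below materialize: the override lands in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf and the base unit in /lib/systemd/system/kubelet.service. On the node, the merged result can be inspected with standard systemd tooling:
	
		systemctl cat kubelet                  # base unit plus the 10-kubeadm.conf drop-in
		systemctl show kubelet -p ExecStart    # the effective command line
	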
	I1006 14:44:42.831321  682995 ssh_runner.go:195] Run: crio config
	I1006 14:44:42.877768  682995 cni.go:84] Creating CNI manager for ""
	I1006 14:44:42.877790  682995 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1006 14:44:42.877819  682995 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1006 14:44:42.877840  682995 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-481559 NodeName:ha-481559 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1006 14:44:42.877966  682995 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-481559"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
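	For reference, a rendered config like the one above can be sanity-checked on the node before init runs; kubeadm ships a validator subcommand (shown as a sketch, assuming the binary path minikube uses below):
	
		sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml
	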
	I1006 14:44:42.877993  682995 kube-vip.go:115] generating kube-vip config ...
	I1006 14:44:42.878035  682995 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1006 14:44:42.890886  682995 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
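	Failing the ip_vs probe only disables kube-vip's optional IPVS load-balancing; the ARP-advertised VIP in the manifest below is still configured. Were IPVS wanted, the usual remediation is loading the modules on the host (a hypothetical step the test does not attempt):
	
		sudo modprobe ip_vs
		sudo modprobe ip_vs_rr
		lsmod | grep ip_vs
	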
	I1006 14:44:42.890995  682995 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
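	The static pod above runs kube-vip on each control-plane node: instances elect a leader through the plndr-cp-lock lease, and the winner advertises the VIP 192.168.49.254 on eth0 via ARP (all values taken from the env block). Once the cluster is healthy this can be observed with:
	
		kubectl -n kube-system get lease plndr-cp-lock
		ip addr show eth0 | grep 192.168.49.254    # on the current leader
	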
	I1006 14:44:42.891046  682995 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1006 14:44:42.899063  682995 binaries.go:44] Found k8s binaries, skipping transfer
	I1006 14:44:42.899132  682995 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1006 14:44:42.906926  682995 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1006 14:44:42.919358  682995 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1006 14:44:42.934141  682995 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1006 14:44:42.945961  682995 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I1006 14:44:42.959489  682995 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1006 14:44:42.962953  682995 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1006 14:44:42.972760  682995 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 14:44:43.053996  682995 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1006 14:44:43.077665  682995 certs.go:69] Setting up /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559 for IP: 192.168.49.2
	I1006 14:44:43.077692  682995 certs.go:195] generating shared ca certs ...
	I1006 14:44:43.077714  682995 certs.go:227] acquiring lock for ca certs: {Name:mka0cc25cb6a953e937aa825fc55167759271aaf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:44:43.077856  682995 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21701-626179/.minikube/ca.key
	I1006 14:44:43.077899  682995 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21701-626179/.minikube/proxy-client-ca.key
	I1006 14:44:43.077909  682995 certs.go:257] generating profile certs ...
	I1006 14:44:43.077963  682995 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/client.key
	I1006 14:44:43.077983  682995 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/client.crt with IP's: []
	I1006 14:44:43.259387  682995 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/client.crt ...
	I1006 14:44:43.259418  682995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/client.crt: {Name:mk058803c7a7f0f2aa3fb547a3aafbba9518c3f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:44:43.259607  682995 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/client.key ...
	I1006 14:44:43.259619  682995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/client.key: {Name:mk0ae3492597f7c1edf0d7262770452fa244a40b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
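	The client cert written here is what kubectl presents to the apiserver for this profile. A rough openssl equivalent of the crypto.go generate-and-sign flow, using the shared minikubeCA (the subject fields are illustrative assumptions, not minikube's verbatim values):
	
		openssl genrsa -out client.key 2048
		openssl req -new -key client.key -subj "/O=system:masters/CN=minikube-user" -out client.csr
		openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key -CAcreateserial -days 1095 -out client.crt
	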
	I1006 14:44:43.265151  682995 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.key.6031b710
	I1006 14:44:43.265175  682995 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.crt.6031b710 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I1006 14:44:43.807062  682995 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.crt.6031b710 ...
	I1006 14:44:43.807095  682995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.crt.6031b710: {Name:mk30dd14f07a4b732bb60853cc2fd5f84f73e2f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:44:43.807283  682995 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.key.6031b710 ...
	I1006 14:44:43.807298  682995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.key.6031b710: {Name:mkf3f5fbdf7957143c03cb611320a2e02acb94c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:44:43.807374  682995 certs.go:382] copying /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.crt.6031b710 -> /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.crt
	I1006 14:44:43.807489  682995 certs.go:386] copying /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.key.6031b710 -> /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.key
	I1006 14:44:43.807558  682995 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.key
	I1006 14:44:43.807574  682995 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.crt with IP's: []
	I1006 14:44:43.994115  682995 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.crt ...
	I1006 14:44:43.994149  682995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.crt: {Name:mk715c6902e25626016d7eb8fdb7b52f0fdce895 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:44:43.994338  682995 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.key ...
	I1006 14:44:43.994350  682995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.key: {Name:mka438ddf42b96ca34511dda1ce60f08f1d48b59 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:44:43.994429  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1006 14:44:43.994449  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1006 14:44:43.994460  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1006 14:44:43.994470  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1006 14:44:43.994480  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1006 14:44:43.994490  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1006 14:44:43.994510  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1006 14:44:43.994522  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1006 14:44:43.994570  682995 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/629719.pem (1338 bytes)
	W1006 14:44:43.994617  682995 certs.go:480] ignoring /home/jenkins/minikube-integration/21701-626179/.minikube/certs/629719_empty.pem, impossibly tiny 0 bytes
	I1006 14:44:43.994630  682995 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca-key.pem (1675 bytes)
	I1006 14:44:43.994653  682995 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem (1082 bytes)
	I1006 14:44:43.994674  682995 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/cert.pem (1123 bytes)
	I1006 14:44:43.994701  682995 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/key.pem (1679 bytes)
	I1006 14:44:43.994739  682995 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem (1708 bytes)
	I1006 14:44:43.994772  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem -> /usr/share/ca-certificates/6297192.pem
	I1006 14:44:43.994786  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1006 14:44:43.994798  682995 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/629719.pem -> /usr/share/ca-certificates/629719.pem
	I1006 14:44:43.995423  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1006 14:44:44.014422  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1006 14:44:44.032422  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1006 14:44:44.050727  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1006 14:44:44.068490  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1006 14:44:44.085540  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1006 14:44:44.102941  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1006 14:44:44.121043  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1006 14:44:44.139583  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem --> /usr/share/ca-certificates/6297192.pem (1708 bytes)
	I1006 14:44:44.159654  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1006 14:44:44.176939  682995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/certs/629719.pem --> /usr/share/ca-certificates/629719.pem (1338 bytes)
	I1006 14:44:44.194332  682995 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1006 14:44:44.207641  682995 ssh_runner.go:195] Run: openssl version
	I1006 14:44:44.214349  682995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6297192.pem && ln -fs /usr/share/ca-certificates/6297192.pem /etc/ssl/certs/6297192.pem"
	I1006 14:44:44.223426  682995 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6297192.pem
	I1006 14:44:44.227339  682995 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  6 14:13 /usr/share/ca-certificates/6297192.pem
	I1006 14:44:44.227401  682995 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6297192.pem
	I1006 14:44:44.261578  682995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6297192.pem /etc/ssl/certs/3ec20f2e.0"
	I1006 14:44:44.270472  682995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1006 14:44:44.279083  682995 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1006 14:44:44.282749  682995 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  6 13:56 /usr/share/ca-certificates/minikubeCA.pem
	I1006 14:44:44.282813  682995 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1006 14:44:44.316484  682995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1006 14:44:44.325228  682995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/629719.pem && ln -fs /usr/share/ca-certificates/629719.pem /etc/ssl/certs/629719.pem"
	I1006 14:44:44.334098  682995 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/629719.pem
	I1006 14:44:44.337988  682995 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  6 14:13 /usr/share/ca-certificates/629719.pem
	I1006 14:44:44.338051  682995 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/629719.pem
	I1006 14:44:44.371914  682995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/629719.pem /etc/ssl/certs/51391683.0"
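	The three test -L || ln -fs blocks above follow OpenSSL's subject-hash convention: a CA under /etc/ssl/certs must be reachable as <subject_hash>.0, where the hash is exactly what openssl x509 -hash -noout prints (b5213941 for minikubeCA here). The generic recipe:
	
		h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
		sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"
	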
	I1006 14:44:44.380847  682995 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1006 14:44:44.384643  682995 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1006 14:44:44.384694  682995 kubeadm.go:400] StartCluster: {Name:ha-481559 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-481559 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 14:44:44.384758  682995 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1006 14:44:44.384823  682995 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1006 14:44:44.413083  682995 cri.go:89] found id: ""
	I1006 14:44:44.413145  682995 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1006 14:44:44.421446  682995 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1006 14:44:44.429380  682995 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1006 14:44:44.429431  682995 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1006 14:44:44.437643  682995 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1006 14:44:44.437667  682995 kubeadm.go:157] found existing configuration files:
	
	I1006 14:44:44.437726  682995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1006 14:44:44.445948  682995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1006 14:44:44.446021  682995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1006 14:44:44.453451  682995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1006 14:44:44.460986  682995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1006 14:44:44.461064  682995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1006 14:44:44.468259  682995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1006 14:44:44.475830  682995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1006 14:44:44.475882  682995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1006 14:44:44.483080  682995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1006 14:44:44.490569  682995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1006 14:44:44.490632  682995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1006 14:44:44.498056  682995 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1006 14:44:44.560210  682995 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1006 14:44:44.618315  682995 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1006 14:48:49.762009  682995 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1006 14:48:49.762136  682995 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1006 14:48:49.765019  682995 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1006 14:48:49.765065  682995 kubeadm.go:318] [preflight] Running pre-flight checks
	I1006 14:48:49.765142  682995 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1006 14:48:49.765192  682995 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1006 14:48:49.765263  682995 kubeadm.go:318] OS: Linux
	I1006 14:48:49.765329  682995 kubeadm.go:318] CGROUPS_CPU: enabled
	I1006 14:48:49.765384  682995 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1006 14:48:49.765424  682995 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1006 14:48:49.765465  682995 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1006 14:48:49.765507  682995 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1006 14:48:49.765557  682995 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1006 14:48:49.765644  682995 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1006 14:48:49.765713  682995 kubeadm.go:318] CGROUPS_IO: enabled
	I1006 14:48:49.765816  682995 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1006 14:48:49.765897  682995 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1006 14:48:49.765974  682995 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1006 14:48:49.766033  682995 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1006 14:48:49.768189  682995 out.go:252]   - Generating certificates and keys ...
	I1006 14:48:49.768304  682995 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1006 14:48:49.768391  682995 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1006 14:48:49.768495  682995 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1006 14:48:49.768546  682995 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1006 14:48:49.768600  682995 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1006 14:48:49.768641  682995 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1006 14:48:49.768684  682995 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1006 14:48:49.768778  682995 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [ha-481559 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1006 14:48:49.768847  682995 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1006 14:48:49.768982  682995 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [ha-481559 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1006 14:48:49.769042  682995 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1006 14:48:49.769108  682995 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1006 14:48:49.769166  682995 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1006 14:48:49.769263  682995 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1006 14:48:49.769339  682995 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1006 14:48:49.769416  682995 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1006 14:48:49.769489  682995 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1006 14:48:49.769549  682995 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1006 14:48:49.769601  682995 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1006 14:48:49.769671  682995 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1006 14:48:49.769753  682995 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1006 14:48:49.771489  682995 out.go:252]   - Booting up control plane ...
	I1006 14:48:49.771577  682995 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1006 14:48:49.771664  682995 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1006 14:48:49.771742  682995 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1006 14:48:49.771858  682995 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1006 14:48:49.771974  682995 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1006 14:48:49.772108  682995 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1006 14:48:49.772220  682995 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1006 14:48:49.772288  682995 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1006 14:48:49.772413  682995 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1006 14:48:49.772556  682995 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1006 14:48:49.772647  682995 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.501252368s
	I1006 14:48:49.772772  682995 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1006 14:48:49.772891  682995 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1006 14:48:49.772971  682995 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1006 14:48:49.773033  682995 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1006 14:48:49.773108  682995 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.001319326s
	I1006 14:48:49.773189  682995 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001358761s
	I1006 14:48:49.773304  682995 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.001281021s
	I1006 14:48:49.773319  682995 kubeadm.go:318] 
	I1006 14:48:49.773407  682995 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1006 14:48:49.773472  682995 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1006 14:48:49.773545  682995 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1006 14:48:49.773657  682995 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1006 14:48:49.773771  682995 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1006 14:48:49.773850  682995 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1006 14:48:49.773891  682995 kubeadm.go:318] 
	W1006 14:48:49.774048  682995 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ha-481559 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ha-481559 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.501252368s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.001319326s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001358761s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001281021s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
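	All three control-plane components failed their health checks, so on-node triage would start with the crictl commands the error text itself suggests, plus the runtime and kubelet journals (standard tooling; shown as a sketch, not output from this run):
	
		sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
		sudo journalctl -u crio --no-pager | tail -n 50
		sudo journalctl -u kubelet --no-pager | tail -n 50
	
	minikube itself takes the automated route instead: the kubeadm reset below wipes the half-initialized state and retries init.
	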
	I1006 14:48:49.774147  682995 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1006 14:48:52.524900  682995 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.75072398s)
	I1006 14:48:52.524985  682995 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1006 14:48:52.538104  682995 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1006 14:48:52.538173  682995 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1006 14:48:52.546610  682995 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1006 14:48:52.546639  682995 kubeadm.go:157] found existing configuration files:
	
	I1006 14:48:52.546692  682995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1006 14:48:52.555271  682995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1006 14:48:52.555334  682995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1006 14:48:52.564502  682995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1006 14:48:52.572861  682995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1006 14:48:52.572925  682995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1006 14:48:52.580681  682995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1006 14:48:52.588574  682995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1006 14:48:52.588636  682995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1006 14:48:52.596314  682995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1006 14:48:52.604007  682995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1006 14:48:52.604073  682995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1006 14:48:52.611967  682995 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1006 14:48:52.650794  682995 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1006 14:48:52.650844  682995 kubeadm.go:318] [preflight] Running pre-flight checks
	I1006 14:48:52.671446  682995 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1006 14:48:52.671559  682995 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1006 14:48:52.671628  682995 kubeadm.go:318] OS: Linux
	I1006 14:48:52.671718  682995 kubeadm.go:318] CGROUPS_CPU: enabled
	I1006 14:48:52.671766  682995 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1006 14:48:52.671811  682995 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1006 14:48:52.671850  682995 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1006 14:48:52.671890  682995 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1006 14:48:52.671928  682995 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1006 14:48:52.671972  682995 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1006 14:48:52.672010  682995 kubeadm.go:318] CGROUPS_IO: enabled
	I1006 14:48:52.732758  682995 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1006 14:48:52.732876  682995 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1006 14:48:52.732979  682995 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1006 14:48:52.739914  682995 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1006 14:48:52.743428  682995 out.go:252]   - Generating certificates and keys ...
	I1006 14:48:52.743535  682995 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1006 14:48:52.743654  682995 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1006 14:48:52.743727  682995 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1006 14:48:52.743777  682995 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1006 14:48:52.743861  682995 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1006 14:48:52.743911  682995 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1006 14:48:52.743985  682995 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1006 14:48:52.744055  682995 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1006 14:48:52.744143  682995 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1006 14:48:52.744228  682995 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1006 14:48:52.744266  682995 kubeadm.go:318] [certs] Using the existing "sa" key
	I1006 14:48:52.744323  682995 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1006 14:48:53.107297  682995 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1006 14:48:53.300701  682995 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1006 14:48:53.503166  682995 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1006 14:48:53.664024  682995 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1006 14:48:53.725865  682995 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1006 14:48:53.726293  682995 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1006 14:48:53.728797  682995 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1006 14:48:53.730586  682995 out.go:252]   - Booting up control plane ...
	I1006 14:48:53.730720  682995 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1006 14:48:53.730830  682995 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1006 14:48:53.730903  682995 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1006 14:48:53.744534  682995 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1006 14:48:53.744672  682995 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1006 14:48:53.752267  682995 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1006 14:48:53.752422  682995 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1006 14:48:53.752505  682995 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1006 14:48:53.852049  682995 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1006 14:48:53.852226  682995 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1006 14:48:54.353729  682995 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 501.825241ms
	I1006 14:48:54.356542  682995 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1006 14:48:54.356619  682995 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1006 14:48:54.356695  682995 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1006 14:48:54.356819  682995 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1006 14:52:54.358331  682995 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.001082251s
	I1006 14:52:54.358653  682995 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001136686s
	I1006 14:52:54.358853  682995 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.001070627s
	I1006 14:52:54.358881  682995 kubeadm.go:318] 
	I1006 14:52:54.359059  682995 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1006 14:52:54.359298  682995 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1006 14:52:54.359539  682995 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1006 14:52:54.359760  682995 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1006 14:52:54.359952  682995 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1006 14:52:54.360116  682995 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1006 14:52:54.360148  682995 kubeadm.go:318] 
	I1006 14:52:54.363033  682995 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1006 14:52:54.363163  682995 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1006 14:52:54.363696  682995 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1006 14:52:54.363761  682995 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1006 14:52:54.363858  682995 kubeadm.go:402] duration metric: took 8m9.979166519s to StartCluster
	I1006 14:52:54.363946  682995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 14:52:54.364031  682995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 14:52:54.392579  682995 cri.go:89] found id: ""
	I1006 14:52:54.392622  682995 logs.go:282] 0 containers: []
	W1006 14:52:54.392631  682995 logs.go:284] No container was found matching "kube-apiserver"
	I1006 14:52:54.392638  682995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 14:52:54.392693  682995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 14:52:54.420188  682995 cri.go:89] found id: ""
	I1006 14:52:54.420226  682995 logs.go:282] 0 containers: []
	W1006 14:52:54.420237  682995 logs.go:284] No container was found matching "etcd"
	I1006 14:52:54.420245  682995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 14:52:54.420299  682995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 14:52:54.445694  682995 cri.go:89] found id: ""
	I1006 14:52:54.445723  682995 logs.go:282] 0 containers: []
	W1006 14:52:54.445733  682995 logs.go:284] No container was found matching "coredns"
	I1006 14:52:54.445740  682995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 14:52:54.445791  682995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 14:52:54.471923  682995 cri.go:89] found id: ""
	I1006 14:52:54.471954  682995 logs.go:282] 0 containers: []
	W1006 14:52:54.471962  682995 logs.go:284] No container was found matching "kube-scheduler"
	I1006 14:52:54.471971  682995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 14:52:54.472030  682995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 14:52:54.498805  682995 cri.go:89] found id: ""
	I1006 14:52:54.498836  682995 logs.go:282] 0 containers: []
	W1006 14:52:54.498848  682995 logs.go:284] No container was found matching "kube-proxy"
	I1006 14:52:54.498857  682995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 14:52:54.498922  682995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 14:52:54.524613  682995 cri.go:89] found id: ""
	I1006 14:52:54.524638  682995 logs.go:282] 0 containers: []
	W1006 14:52:54.524646  682995 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 14:52:54.524652  682995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 14:52:54.524708  682995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 14:52:54.551140  682995 cri.go:89] found id: ""
	I1006 14:52:54.551170  682995 logs.go:282] 0 containers: []
	W1006 14:52:54.551181  682995 logs.go:284] No container was found matching "kindnet"
	I1006 14:52:54.551194  682995 logs.go:123] Gathering logs for CRI-O ...
	I1006 14:52:54.551220  682995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 14:52:54.615573  682995 logs.go:123] Gathering logs for container status ...
	I1006 14:52:54.615607  682995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 14:52:54.645703  682995 logs.go:123] Gathering logs for kubelet ...
	I1006 14:52:54.645732  682995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 14:52:54.709506  682995 logs.go:123] Gathering logs for dmesg ...
	I1006 14:52:54.709543  682995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 14:52:54.722963  682995 logs.go:123] Gathering logs for describe nodes ...
	I1006 14:52:54.722997  682995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 14:52:54.783016  682995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:52:54.774940    2611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 14:52:54.776283    2611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 14:52:54.777585    2611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 14:52:54.778053    2611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 14:52:54.779590    2611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 14:52:54.774940    2611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 14:52:54.776283    2611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 14:52:54.777585    2611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 14:52:54.778053    2611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 14:52:54.779590    2611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W1006 14:52:54.783054  682995 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.825241ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.001082251s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001136686s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001070627s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1006 14:52:54.783107  682995 out.go:285] * 
	W1006 14:52:54.783182  682995 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.825241ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.001082251s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001136686s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001070627s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1006 14:52:54.783200  682995 out.go:285] * 
	W1006 14:52:54.785658  682995 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1006 14:52:54.789273  682995 out.go:203] 
	W1006 14:52:54.790573  682995 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.825241ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.001082251s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001136686s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001070627s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1006 14:52:54.790604  682995 out.go:285] * 
	I1006 14:52:54.791821  682995 out.go:203] 
	
	
	==> CRI-O <==
	Oct 06 14:55:20 ha-481559 crio[777]: time="2025-10-06T14:55:20.248778146Z" level=info msg="createCtr: removing container d5363c9de6dcae0be0af7cf75c888d7675f0b4f54a7825076382c8dc45623594" id=2915b120-42a8-414b-a2ad-d45f3daa81ce name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:55:20 ha-481559 crio[777]: time="2025-10-06T14:55:20.248815689Z" level=info msg="createCtr: deleting container d5363c9de6dcae0be0af7cf75c888d7675f0b4f54a7825076382c8dc45623594 from storage" id=2915b120-42a8-414b-a2ad-d45f3daa81ce name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:55:20 ha-481559 crio[777]: time="2025-10-06T14:55:20.250975685Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-ha-481559_kube-system_520c6060936b1c2aac479c99ed6c0355_0" id=2915b120-42a8-414b-a2ad-d45f3daa81ce name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:55:25 ha-481559 crio[777]: time="2025-10-06T14:55:25.222343942Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=d3faabe5-3e05-4a0d-a77d-cea625e0efde name=/runtime.v1.ImageService/ImageStatus
	Oct 06 14:55:25 ha-481559 crio[777]: time="2025-10-06T14:55:25.223228731Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=9e6a7c11-f6cd-4ee3-b516-a6197c64cf8b name=/runtime.v1.ImageService/ImageStatus
	Oct 06 14:55:25 ha-481559 crio[777]: time="2025-10-06T14:55:25.224164048Z" level=info msg="Creating container: kube-system/kube-scheduler-ha-481559/kube-scheduler" id=69d896dd-3369-47d6-b61b-7e6979286ec2 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:55:25 ha-481559 crio[777]: time="2025-10-06T14:55:25.224404475Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 14:55:25 ha-481559 crio[777]: time="2025-10-06T14:55:25.227669018Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 14:55:25 ha-481559 crio[777]: time="2025-10-06T14:55:25.228080126Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 14:55:25 ha-481559 crio[777]: time="2025-10-06T14:55:25.24352531Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=69d896dd-3369-47d6-b61b-7e6979286ec2 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:55:25 ha-481559 crio[777]: time="2025-10-06T14:55:25.244947586Z" level=info msg="createCtr: deleting container ID 648df8e460bd6d4294793563cfec4e688534a4ad87449c157fe9c0ac46d3b2b0 from idIndex" id=69d896dd-3369-47d6-b61b-7e6979286ec2 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:55:25 ha-481559 crio[777]: time="2025-10-06T14:55:25.244994491Z" level=info msg="createCtr: removing container 648df8e460bd6d4294793563cfec4e688534a4ad87449c157fe9c0ac46d3b2b0" id=69d896dd-3369-47d6-b61b-7e6979286ec2 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:55:25 ha-481559 crio[777]: time="2025-10-06T14:55:25.24503395Z" level=info msg="createCtr: deleting container 648df8e460bd6d4294793563cfec4e688534a4ad87449c157fe9c0ac46d3b2b0 from storage" id=69d896dd-3369-47d6-b61b-7e6979286ec2 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:55:25 ha-481559 crio[777]: time="2025-10-06T14:55:25.247459358Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-ha-481559_kube-system_cc93cb8d89afaa943672c70952b45174_0" id=69d896dd-3369-47d6-b61b-7e6979286ec2 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:55:30 ha-481559 crio[777]: time="2025-10-06T14:55:30.221944262Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=47b82f1a-6519-46d3-a04d-906fc3cb70e9 name=/runtime.v1.ImageService/ImageStatus
	Oct 06 14:55:30 ha-481559 crio[777]: time="2025-10-06T14:55:30.222867662Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=08f4e805-89b2-4da7-a94a-11f74b27dd86 name=/runtime.v1.ImageService/ImageStatus
	Oct 06 14:55:30 ha-481559 crio[777]: time="2025-10-06T14:55:30.223891414Z" level=info msg="Creating container: kube-system/kube-apiserver-ha-481559/kube-apiserver" id=84745903-113f-47e3-ae74-bc4bc513201a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:55:30 ha-481559 crio[777]: time="2025-10-06T14:55:30.224120894Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 14:55:30 ha-481559 crio[777]: time="2025-10-06T14:55:30.227661355Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 14:55:30 ha-481559 crio[777]: time="2025-10-06T14:55:30.228080477Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 14:55:30 ha-481559 crio[777]: time="2025-10-06T14:55:30.242528819Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=84745903-113f-47e3-ae74-bc4bc513201a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:55:30 ha-481559 crio[777]: time="2025-10-06T14:55:30.243967991Z" level=info msg="createCtr: deleting container ID c00ddedeb3fe3bd2bf30a81e6f2b06c54fdd965817589c6168999c38fab68f61 from idIndex" id=84745903-113f-47e3-ae74-bc4bc513201a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:55:30 ha-481559 crio[777]: time="2025-10-06T14:55:30.244016654Z" level=info msg="createCtr: removing container c00ddedeb3fe3bd2bf30a81e6f2b06c54fdd965817589c6168999c38fab68f61" id=84745903-113f-47e3-ae74-bc4bc513201a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:55:30 ha-481559 crio[777]: time="2025-10-06T14:55:30.244057784Z" level=info msg="createCtr: deleting container c00ddedeb3fe3bd2bf30a81e6f2b06c54fdd965817589c6168999c38fab68f61 from storage" id=84745903-113f-47e3-ae74-bc4bc513201a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 14:55:30 ha-481559 crio[777]: time="2025-10-06T14:55:30.246520651Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-ha-481559_kube-system_b4e1cca8a09d3789a7e0862428dfe0db_0" id=84745903-113f-47e3-ae74-bc4bc513201a name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 14:55:30.917969    4826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 14:55:30.918612    4826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 14:55:30.920176    4826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 14:55:30.920699    4826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 14:55:30.922333    4826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	
	
	==> kernel <==
	 14:55:30 up  5:37,  0 user,  load average: 0.44, 0.18, 0.18
	Linux ha-481559 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 06 14:55:20 ha-481559 kubelet[1985]:         container etcd start failed in pod etcd-ha-481559_kube-system(520c6060936b1c2aac479c99ed6c0355): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 06 14:55:20 ha-481559 kubelet[1985]:  > logger="UnhandledError"
	Oct 06 14:55:20 ha-481559 kubelet[1985]: E1006 14:55:20.251493    1985 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-ha-481559" podUID="520c6060936b1c2aac479c99ed6c0355"
	Oct 06 14:55:24 ha-481559 kubelet[1985]: E1006 14:55:24.249120    1985 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-481559\" not found"
	Oct 06 14:55:24 ha-481559 kubelet[1985]: E1006 14:55:24.870054    1985 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-481559?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 06 14:55:25 ha-481559 kubelet[1985]: I1006 14:55:25.055604    1985 kubelet_node_status.go:75] "Attempting to register node" node="ha-481559"
	Oct 06 14:55:25 ha-481559 kubelet[1985]: E1006 14:55:25.056041    1985 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-481559"
	Oct 06 14:55:25 ha-481559 kubelet[1985]: E1006 14:55:25.221848    1985 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-481559\" not found" node="ha-481559"
	Oct 06 14:55:25 ha-481559 kubelet[1985]: E1006 14:55:25.247799    1985 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 06 14:55:25 ha-481559 kubelet[1985]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 06 14:55:25 ha-481559 kubelet[1985]:  > podSandboxID="28815a6c32deaa458111079bbac61f47b8e22f338f2282fab7d62077c8b07f1e"
	Oct 06 14:55:25 ha-481559 kubelet[1985]: E1006 14:55:25.247905    1985 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 06 14:55:25 ha-481559 kubelet[1985]:         container kube-scheduler start failed in pod kube-scheduler-ha-481559_kube-system(cc93cb8d89afaa943672c70952b45174): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 06 14:55:25 ha-481559 kubelet[1985]:  > logger="UnhandledError"
	Oct 06 14:55:25 ha-481559 kubelet[1985]: E1006 14:55:25.247936    1985 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-ha-481559" podUID="cc93cb8d89afaa943672c70952b45174"
	Oct 06 14:55:29 ha-481559 kubelet[1985]: E1006 14:55:29.045160    1985 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8443: connect: connection refused" event="&Event{ObjectMeta:{ha-481559.186bee56630f6256  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-481559,UID:ha-481559,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ha-481559 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ha-481559,},FirstTimestamp:2025-10-06 14:48:54.214861398 +0000 UTC m=+0.361990569,LastTimestamp:2025-10-06 14:48:54.214861398 +0000 UTC m=+0.361990569,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-481559,}"
	Oct 06 14:55:29 ha-481559 kubelet[1985]: E1006 14:55:29.079805    1985 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.49.2:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	Oct 06 14:55:30 ha-481559 kubelet[1985]: E1006 14:55:30.221424    1985 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-481559\" not found" node="ha-481559"
	Oct 06 14:55:30 ha-481559 kubelet[1985]: E1006 14:55:30.246850    1985 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 06 14:55:30 ha-481559 kubelet[1985]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 06 14:55:30 ha-481559 kubelet[1985]:  > podSandboxID="cadd804367d6dcdba2fb49fe06e3c1db8b35e6ee5c505328925ae346d4cdb867"
	Oct 06 14:55:30 ha-481559 kubelet[1985]: E1006 14:55:30.246979    1985 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 06 14:55:30 ha-481559 kubelet[1985]:         container kube-apiserver start failed in pod kube-apiserver-ha-481559_kube-system(b4e1cca8a09d3789a7e0862428dfe0db): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 06 14:55:30 ha-481559 kubelet[1985]:  > logger="UnhandledError"
	Oct 06 14:55:30 ha-481559 kubelet[1985]: E1006 14:55:30.247023    1985 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-ha-481559" podUID="b4e1cca8a09d3789a7e0862428dfe0db"
	

-- /stdout --
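
The repeated CRI-O error above, "Container creation error: cannot open sd-bus: No such file or directory", is the proximate failure: every CreateContainer call for etcd, kube-scheduler and kube-apiserver dies before the container exists, which is why the kubeadm control-plane checks time out. kubeadm's own advice in this log is to inspect the containers with crictl; a minimal sketch of that workflow, assuming the ha-481559 node is still running (these are hypothetical follow-up commands, not part of the recorded test run):

	# list every Kubernetes container CRI-O knows about (the command kubeadm prints above)
	minikube -p ha-481559 ssh -- sudo crictl ps -a | grep kube | grep -v pause
	# then dump the logs of a failing container by its ID
	minikube -p ha-481559 ssh -- sudo crictl logs CONTAINERID

In this run the "container status" section above is empty, i.e. no container was ever created, so the kubelet journal already captured in the log is a more useful signal than per-container logs.
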
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-481559 -n ha-481559
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-481559 -n ha-481559: exit status 6 (314.360551ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1006 14:55:31.309700  695997 status.go:458] kubeconfig endpoint: get endpoint: "ha-481559" does not appear in /home/jenkins/minikube-integration/21701-626179/kubeconfig

** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "ha-481559" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.62s)
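
One plausible reading of the sd-bus error, not confirmed anywhere in this report: CRI-O (or its OCI runtime) is trying to use the systemd cgroup manager, which needs a reachable systemd D-Bus socket inside the node container, and fails if systemd is not actually running there. Two hedged checks, assuming the profile is up (the commands are real minikube/procps/crio invocations, but the diagnosis itself is an assumption):

	# is systemd PID 1 inside the node? the systemd cgroup manager requires it
	minikube -p ha-481559 ssh -- ps -p 1 -o comm=
	# which cgroup manager is CRI-O configured with?
	minikube -p ha-481559 ssh -- sudo crio config | grep cgroup_manager

If the config reports cgroup_manager = "systemd" while PID 1 is not systemd, that mismatch would account for every CreateContainer failure in this log.
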

x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (370.02s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-481559 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-481559 stop --alsologtostderr -v 5
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-481559 stop --alsologtostderr -v 5: (1.205279948s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-481559 start --wait true --alsologtostderr -v 5
E1006 15:00:30.512603  629719 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-481559 start --wait true --alsologtostderr -v 5: exit status 80 (6m7.391155824s)

-- stdout --
	* [ha-481559] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21701
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21701-626179/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21701-626179/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "ha-481559" primary control-plane node in "ha-481559" cluster
	* Pulling base image v0.0.48-1759382731-21643 ...
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: 
	
	

-- /stdout --
** stderr ** 
	I1006 14:55:32.625450  696361 out.go:360] Setting OutFile to fd 1 ...
	I1006 14:55:32.625699  696361 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 14:55:32.625708  696361 out.go:374] Setting ErrFile to fd 2...
	I1006 14:55:32.625712  696361 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 14:55:32.625887  696361 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-626179/.minikube/bin
	I1006 14:55:32.626365  696361 out.go:368] Setting JSON to false
	I1006 14:55:32.627324  696361 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":20269,"bootTime":1759742264,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1006 14:55:32.627441  696361 start.go:140] virtualization: kvm guest
	I1006 14:55:32.629359  696361 out.go:179] * [ha-481559] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1006 14:55:32.630682  696361 out.go:179]   - MINIKUBE_LOCATION=21701
	I1006 14:55:32.630681  696361 notify.go:220] Checking for updates...
	I1006 14:55:32.632684  696361 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1006 14:55:32.633920  696361 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21701-626179/kubeconfig
	I1006 14:55:32.635038  696361 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21701-626179/.minikube
	I1006 14:55:32.635990  696361 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1006 14:55:32.636965  696361 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1006 14:55:32.638369  696361 config.go:182] Loaded profile config "ha-481559": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 14:55:32.638498  696361 driver.go:421] Setting default libvirt URI to qemu:///system
	I1006 14:55:32.662312  696361 docker.go:123] docker version: linux-28.5.0:Docker Engine - Community
	I1006 14:55:32.662403  696361 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 14:55:32.719438  696361 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-06 14:55:32.709294788 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1006 14:55:32.719550  696361 docker.go:318] overlay module found
	I1006 14:55:32.721174  696361 out.go:179] * Using the docker driver based on existing profile
	I1006 14:55:32.722228  696361 start.go:304] selected driver: docker
	I1006 14:55:32.722242  696361 start.go:924] validating driver "docker" against &{Name:ha-481559 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-481559 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 14:55:32.722316  696361 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1006 14:55:32.722398  696361 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 14:55:32.778099  696361 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-06 14:55:32.768235461 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
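The two `docker system info --format "{{json .}}"` dumps above are how the driver re-probes the host engine before reusing the profile. A minimal sketch of the same probe, assuming only that the docker CLI is on PATH; the struct is a hypothetical subset of the fields visible in the dump, not minikube's own type:

package main

import (
    "encoding/json"
    "fmt"
    "os/exec"
)

// dockerInfo is a hypothetical subset of the JSON printed by
// `docker system info --format "{{json .}}"`.
type dockerInfo struct {
    ServerVersion string `json:"ServerVersion"`
    CgroupDriver  string `json:"CgroupDriver"`
    NCPU          int    `json:"NCPU"`
    MemTotal      int64  `json:"MemTotal"`
}

func main() {
    out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
    if err != nil {
        panic(err)
    }
    var info dockerInfo
    if err := json.Unmarshal(out, &info); err != nil {
        panic(err)
    }
    // The dump above shows 28.5.0 / systemd / 8 CPUs / 33652162560 bytes.
    fmt.Printf("docker %s, cgroup driver %s, %d CPUs, %d bytes RAM\n",
        info.ServerVersion, info.CgroupDriver, info.NCPU, info.MemTotal)
}

The later "detected \"systemd\" cgroup driver on host os" decision appears to fall out of the same CgroupDriver field.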
	I1006 14:55:32.778829  696361 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1006 14:55:32.778865  696361 cni.go:84] Creating CNI manager for ""
	I1006 14:55:32.778913  696361 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1006 14:55:32.778963  696361 start.go:348] cluster config:
	{Name:ha-481559 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-481559 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 14:55:32.780704  696361 out.go:179] * Starting "ha-481559" primary control-plane node in "ha-481559" cluster
	I1006 14:55:32.781770  696361 cache.go:123] Beginning downloading kic base image for docker with crio
	I1006 14:55:32.782811  696361 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1006 14:55:32.783693  696361 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1006 14:55:32.783726  696361 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21701-626179/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1006 14:55:32.783724  696361 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1006 14:55:32.783743  696361 cache.go:58] Caching tarball of preloaded images
	I1006 14:55:32.783836  696361 preload.go:233] Found /home/jenkins/minikube-integration/21701-626179/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1006 14:55:32.783847  696361 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1006 14:55:32.783950  696361 profile.go:143] Saving config to /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/config.json ...
	I1006 14:55:32.804191  696361 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1006 14:55:32.804233  696361 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1006 14:55:32.804253  696361 cache.go:232] Successfully downloaded all kic artifacts
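image.go above asks the local daemon for the kic base image before considering a pull. A sketch of that kind of existence check, assuming the docker CLI (`docker image inspect` exits nonzero when the image is absent; ensureImage is a hypothetical helper, not minikube's API):

package main

import (
    "fmt"
    "os/exec"
)

// ensureImage pulls ref only if the local daemon does not already have it.
func ensureImage(ref string) error {
    // `docker image inspect` exits 0 iff the image exists locally.
    if err := exec.Command("docker", "image", "inspect", ref).Run(); err == nil {
        fmt.Println("found in local docker daemon, skipping pull")
        return nil
    }
    return exec.Command("docker", "pull", ref).Run()
}

func main() {
    if err := ensureImage("registry.k8s.io/pause:3.10.1"); err != nil {
        panic(err)
    }
}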
	I1006 14:55:32.804278  696361 start.go:360] acquireMachinesLock for ha-481559: {Name:mk240cd185ab39e9e4d3fa7c476aea5736cb5b11 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1006 14:55:32.804339  696361 start.go:364] duration metric: took 38.329µs to acquireMachinesLock for "ha-481559"
	I1006 14:55:32.804358  696361 start.go:96] Skipping create...Using existing machine configuration
	I1006 14:55:32.804363  696361 fix.go:54] fixHost starting: 
	I1006 14:55:32.804593  696361 cli_runner.go:164] Run: docker container inspect ha-481559 --format={{.State.Status}}
	I1006 14:55:32.821756  696361 fix.go:112] recreateIfNeeded on ha-481559: state=Stopped err=<nil>
	W1006 14:55:32.821781  696361 fix.go:138] unexpected machine state, will restart: <nil>
	I1006 14:55:32.823475  696361 out.go:252] * Restarting existing docker container for "ha-481559" ...
	I1006 14:55:32.823539  696361 cli_runner.go:164] Run: docker start ha-481559
	I1006 14:55:33.064065  696361 cli_runner.go:164] Run: docker container inspect ha-481559 --format={{.State.Status}}
	I1006 14:55:33.082711  696361 kic.go:430] container "ha-481559" state is running.
	I1006 14:55:33.083092  696361 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-481559
	I1006 14:55:33.102599  696361 profile.go:143] Saving config to /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/config.json ...
	I1006 14:55:33.102818  696361 machine.go:93] provisionDockerMachine start ...
	I1006 14:55:33.102885  696361 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:55:33.121902  696361 main.go:141] libmachine: Using SSH client type: native
	I1006 14:55:33.122245  696361 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32888 <nil> <nil>}
	I1006 14:55:33.122265  696361 main.go:141] libmachine: About to run SSH command:
	hostname
	I1006 14:55:33.122961  696361 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:35020->127.0.0.1:32888: read: connection reset by peer
	I1006 14:55:36.268055  696361 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-481559
	
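The dial at 14:55:33 fails with "connection reset by peer" because the just-restarted container's sshd is not yet accepting connections; the same command succeeds about three seconds later. A sketch of that retry loop with golang.org/x/crypto/ssh, using the address, user, and key path from the log (dialWithRetry is a hypothetical helper):

package main

import (
    "fmt"
    "os"
    "time"

    "golang.org/x/crypto/ssh"
)

// dialWithRetry retries the SSH handshake until the freshly restarted
// container's sshd is ready or the deadline passes.
func dialWithRetry(addr, user, keyPath string, deadline time.Duration) (*ssh.Client, error) {
    pemBytes, err := os.ReadFile(keyPath)
    if err != nil {
        return nil, err
    }
    signer, err := ssh.ParsePrivateKey(pemBytes)
    if err != nil {
        return nil, err
    }
    cfg := &ssh.ClientConfig{
        User:            user,
        Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
        HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for a local test rig
        Timeout:         5 * time.Second,
    }
    stop := time.Now().Add(deadline)
    for {
        client, err := ssh.Dial("tcp", addr, cfg)
        if err == nil {
            return client, nil
        }
        if time.Now().After(stop) {
            return nil, fmt.Errorf("ssh not ready: %w", err)
        }
        time.Sleep(time.Second)
    }
}

func main() {
    c, err := dialWithRetry("127.0.0.1:32888", "docker",
        "/home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa",
        30*time.Second)
    if err != nil {
        panic(err)
    }
    defer c.Close()
    fmt.Println("ssh ready")
}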
	I1006 14:55:36.268107  696361 ubuntu.go:182] provisioning hostname "ha-481559"
	I1006 14:55:36.268177  696361 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:55:36.286749  696361 main.go:141] libmachine: Using SSH client type: native
	I1006 14:55:36.287029  696361 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32888 <nil> <nil>}
	I1006 14:55:36.287044  696361 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-481559 && echo "ha-481559" | sudo tee /etc/hostname
	I1006 14:55:36.438131  696361 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-481559
	
	I1006 14:55:36.438276  696361 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:55:36.455780  696361 main.go:141] libmachine: Using SSH client type: native
	I1006 14:55:36.455989  696361 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32888 <nil> <nil>}
	I1006 14:55:36.456006  696361 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-481559' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-481559/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-481559' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1006 14:55:36.598528  696361 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1006 14:55:36.598558  696361 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21701-626179/.minikube CaCertPath:/home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21701-626179/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21701-626179/.minikube}
	I1006 14:55:36.598594  696361 ubuntu.go:190] setting up certificates
	I1006 14:55:36.598608  696361 provision.go:84] configureAuth start
	I1006 14:55:36.598671  696361 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-481559
	I1006 14:55:36.615965  696361 provision.go:143] copyHostCerts
	I1006 14:55:36.616004  696361 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21701-626179/.minikube/ca.pem
	I1006 14:55:36.616065  696361 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-626179/.minikube/ca.pem, removing ...
	I1006 14:55:36.616086  696361 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-626179/.minikube/ca.pem
	I1006 14:55:36.616175  696361 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21701-626179/.minikube/ca.pem (1082 bytes)
	I1006 14:55:36.616305  696361 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21701-626179/.minikube/cert.pem
	I1006 14:55:36.616337  696361 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-626179/.minikube/cert.pem, removing ...
	I1006 14:55:36.616347  696361 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-626179/.minikube/cert.pem
	I1006 14:55:36.616392  696361 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21701-626179/.minikube/cert.pem (1123 bytes)
	I1006 14:55:36.616465  696361 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21701-626179/.minikube/key.pem
	I1006 14:55:36.616495  696361 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-626179/.minikube/key.pem, removing ...
	I1006 14:55:36.616506  696361 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-626179/.minikube/key.pem
	I1006 14:55:36.616549  696361 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21701-626179/.minikube/key.pem (1679 bytes)
	I1006 14:55:36.616693  696361 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21701-626179/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca-key.pem org=jenkins.ha-481559 san=[127.0.0.1 192.168.49.2 ha-481559 localhost minikube]
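provision.go issues a server certificate whose SANs cover every name and address the machine can be reached by: [127.0.0.1 192.168.49.2 ha-481559 localhost minikube]. A simplified sketch with crypto/x509, self-signed here rather than signed by the minikube CA as the log shows, but carrying the same SAN list:

package main

import (
    "crypto/ecdsa"
    "crypto/elliptic"
    "crypto/rand"
    "crypto/x509"
    "crypto/x509/pkix"
    "encoding/pem"
    "math/big"
    "net"
    "os"
    "time"
)

func main() {
    key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
    if err != nil {
        panic(err)
    }
    tmpl := &x509.Certificate{
        SerialNumber: big.NewInt(1),
        Subject:      pkix.Name{Organization: []string{"jenkins.ha-481559"}},
        DNSNames:     []string{"ha-481559", "localhost", "minikube"},
        IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
        NotBefore:    time.Now(),
        NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config
        KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
        ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    }
    // Self-signed: template doubles as parent. A CA-signed cert would pass
    // the CA certificate and CA key here instead.
    der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    if err != nil {
        panic(err)
    }
    out, err := os.Create("server.pem")
    if err != nil {
        panic(err)
    }
    defer out.Close()
    pem.Encode(out, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}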
	I1006 14:55:36.950020  696361 provision.go:177] copyRemoteCerts
	I1006 14:55:36.950096  696361 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1006 14:55:36.950140  696361 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:55:36.967901  696361 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa Username:docker}
	I1006 14:55:37.069642  696361 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1006 14:55:37.069695  696361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1006 14:55:37.087171  696361 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1006 14:55:37.087278  696361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1006 14:55:37.104388  696361 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1006 14:55:37.104471  696361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1006 14:55:37.121024  696361 provision.go:87] duration metric: took 522.404021ms to configureAuth
	I1006 14:55:37.121046  696361 ubuntu.go:206] setting minikube options for container-runtime
	I1006 14:55:37.121222  696361 config.go:182] Loaded profile config "ha-481559": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 14:55:37.121328  696361 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:55:37.139234  696361 main.go:141] libmachine: Using SSH client type: native
	I1006 14:55:37.139495  696361 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32888 <nil> <nil>}
	I1006 14:55:37.139522  696361 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1006 14:55:37.394808  696361 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1006 14:55:37.394835  696361 machine.go:96] duration metric: took 4.292002113s to provisionDockerMachine
	I1006 14:55:37.394849  696361 start.go:293] postStartSetup for "ha-481559" (driver="docker")
	I1006 14:55:37.394860  696361 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1006 14:55:37.394929  696361 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1006 14:55:37.394973  696361 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:55:37.413054  696361 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa Username:docker}
	I1006 14:55:37.514362  696361 ssh_runner.go:195] Run: cat /etc/os-release
	I1006 14:55:37.517813  696361 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1006 14:55:37.517836  696361 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1006 14:55:37.517847  696361 filesync.go:126] Scanning /home/jenkins/minikube-integration/21701-626179/.minikube/addons for local assets ...
	I1006 14:55:37.517906  696361 filesync.go:126] Scanning /home/jenkins/minikube-integration/21701-626179/.minikube/files for local assets ...
	I1006 14:55:37.518019  696361 filesync.go:149] local asset: /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem -> 6297192.pem in /etc/ssl/certs
	I1006 14:55:37.518030  696361 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem -> /etc/ssl/certs/6297192.pem
	I1006 14:55:37.518152  696361 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1006 14:55:37.525401  696361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem --> /etc/ssl/certs/6297192.pem (1708 bytes)
	I1006 14:55:37.541908  696361 start.go:296] duration metric: took 147.043607ms for postStartSetup
	I1006 14:55:37.541980  696361 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1006 14:55:37.542026  696361 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:55:37.559403  696361 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa Username:docker}
	I1006 14:55:37.657540  696361 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1006 14:55:37.662107  696361 fix.go:56] duration metric: took 4.857735821s for fixHost
	I1006 14:55:37.662133  696361 start.go:83] releasing machines lock for "ha-481559", held for 4.857782629s
	I1006 14:55:37.662199  696361 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-481559
	I1006 14:55:37.679712  696361 ssh_runner.go:195] Run: cat /version.json
	I1006 14:55:37.679736  696361 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1006 14:55:37.679759  696361 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:55:37.679787  696361 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:55:37.697300  696361 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa Username:docker}
	I1006 14:55:37.697564  696361 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa Username:docker}
	I1006 14:55:37.851243  696361 ssh_runner.go:195] Run: systemctl --version
	I1006 14:55:37.857782  696361 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1006 14:55:37.892065  696361 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1006 14:55:37.896595  696361 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1006 14:55:37.896653  696361 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1006 14:55:37.904304  696361 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1006 14:55:37.904326  696361 start.go:495] detecting cgroup driver to use...
	I1006 14:55:37.904354  696361 detect.go:190] detected "systemd" cgroup driver on host os
	I1006 14:55:37.904388  696361 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1006 14:55:37.918633  696361 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1006 14:55:37.929951  696361 docker.go:218] disabling cri-docker service (if available) ...
	I1006 14:55:37.930003  696361 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1006 14:55:37.943242  696361 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1006 14:55:37.954619  696361 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1006 14:55:38.026399  696361 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1006 14:55:38.105961  696361 docker.go:234] disabling docker service ...
	I1006 14:55:38.106042  696361 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1006 14:55:38.120803  696361 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1006 14:55:38.132404  696361 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1006 14:55:38.209222  696361 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1006 14:55:38.289009  696361 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1006 14:55:38.301313  696361 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1006 14:55:38.315068  696361 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1006 14:55:38.315130  696361 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:55:38.323823  696361 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1006 14:55:38.323882  696361 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:55:38.332351  696361 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:55:38.340690  696361 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:55:38.349706  696361 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1006 14:55:38.357352  696361 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:55:38.365990  696361 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:55:38.374123  696361 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:55:38.382364  696361 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1006 14:55:38.389293  696361 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1006 14:55:38.396102  696361 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 14:55:38.474259  696361 ssh_runner.go:195] Run: sudo systemctl restart crio
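The run above pins the pause image, switches cgroup_manager to systemd, and opens unprivileged ports by rewriting /etc/crio/crio.conf.d/02-crio.conf with sed one-liners, then restarts CRI-O. The pause_image edit expressed in Go, assuming the drop-in path from the log:

package main

import (
    "os"
    "regexp"
)

func main() {
    // Mirrors the first sed one-liner above: replace any existing
    // pause_image line with the pinned registry.k8s.io image.
    const conf = "/etc/crio/crio.conf.d/02-crio.conf"
    data, err := os.ReadFile(conf)
    if err != nil {
        panic(err)
    }
    re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
    data = re.ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
    if err := os.WriteFile(conf, data, 0o644); err != nil {
        panic(err)
    }
}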
	I1006 14:55:38.579652  696361 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1006 14:55:38.579712  696361 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1006 14:55:38.583658  696361 start.go:563] Will wait 60s for crictl version
	I1006 14:55:38.583711  696361 ssh_runner.go:195] Run: which crictl
	I1006 14:55:38.587093  696361 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1006 14:55:38.611002  696361 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1006 14:55:38.611081  696361 ssh_runner.go:195] Run: crio --version
	I1006 14:55:38.639866  696361 ssh_runner.go:195] Run: crio --version
	I1006 14:55:38.670329  696361 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1006 14:55:38.671337  696361 cli_runner.go:164] Run: docker network inspect ha-481559 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1006 14:55:38.687899  696361 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1006 14:55:38.691971  696361 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
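The bash pipeline above keeps the host.minikube.internal mapping idempotent: strip any line already ending in the name, append the fresh mapping, and copy the result back over /etc/hosts. An equivalent sketch in Go (pinHost is a hypothetical helper):

package main

import (
    "os"
    "strings"
)

// pinHost rewrites an /etc/hosts-style file so that exactly one line
// maps name to ip, matching the grep -v / echo / cp pipeline above.
func pinHost(path, ip, name string) error {
    data, err := os.ReadFile(path)
    if err != nil {
        return err
    }
    var kept []string
    for _, line := range strings.Split(string(data), "\n") {
        if line != "" && !strings.HasSuffix(line, "\t"+name) {
            kept = append(kept, line)
        }
    }
    kept = append(kept, ip+"\t"+name)
    return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
    if err := pinHost("/etc/hosts", "192.168.49.1", "host.minikube.internal"); err != nil {
        panic(err)
    }
}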
	I1006 14:55:38.702038  696361 kubeadm.go:883] updating cluster {Name:ha-481559 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-481559 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1006 14:55:38.702130  696361 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1006 14:55:38.702176  696361 ssh_runner.go:195] Run: sudo crictl images --output json
	I1006 14:55:38.734706  696361 crio.go:514] all images are preloaded for cri-o runtime.
	I1006 14:55:38.734729  696361 crio.go:433] Images already preloaded, skipping extraction
	I1006 14:55:38.734788  696361 ssh_runner.go:195] Run: sudo crictl images --output json
	I1006 14:55:38.761257  696361 crio.go:514] all images are preloaded for cri-o runtime.
	I1006 14:55:38.761292  696361 cache_images.go:85] Images are preloaded, skipping loading
	I1006 14:55:38.761302  696361 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1006 14:55:38.761450  696361 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-481559 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-481559 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1006 14:55:38.761537  696361 ssh_runner.go:195] Run: crio config
	I1006 14:55:38.806722  696361 cni.go:84] Creating CNI manager for ""
	I1006 14:55:38.806741  696361 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1006 14:55:38.806764  696361 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1006 14:55:38.806790  696361 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-481559 NodeName:ha-481559 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1006 14:55:38.806983  696361 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-481559"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
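The generated kubeadm config above is four YAML documents in one file: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. A quick sanity check that walks the documents and prints each kind, assuming gopkg.in/yaml.v3 and the kubeadm.yaml.new path used below:

package main

import (
    "fmt"
    "io"
    "os"

    "gopkg.in/yaml.v3"
)

func main() {
    f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
    if err != nil {
        panic(err)
    }
    defer f.Close()
    dec := yaml.NewDecoder(f)
    for {
        var doc map[string]interface{}
        if err := dec.Decode(&doc); err == io.EOF {
            break
        } else if err != nil {
            panic(err)
        }
        // e.g. "kubeadm.k8s.io/v1beta4 InitConfiguration"
        fmt.Printf("%v %v\n", doc["apiVersion"], doc["kind"])
    }
}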
	I1006 14:55:38.807055  696361 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1006 14:55:38.815286  696361 binaries.go:44] Found k8s binaries, skipping transfer
	I1006 14:55:38.815345  696361 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1006 14:55:38.822791  696361 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1006 14:55:38.834974  696361 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1006 14:55:38.846564  696361 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1006 14:55:38.858492  696361 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1006 14:55:38.861799  696361 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1006 14:55:38.871288  696361 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 14:55:38.948793  696361 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1006 14:55:38.968510  696361 certs.go:69] Setting up /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559 for IP: 192.168.49.2
	I1006 14:55:38.968530  696361 certs.go:195] generating shared ca certs ...
	I1006 14:55:38.968554  696361 certs.go:227] acquiring lock for ca certs: {Name:mka0cc25cb6a953e937aa825fc55167759271aaf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:55:38.968714  696361 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21701-626179/.minikube/ca.key
	I1006 14:55:38.968769  696361 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21701-626179/.minikube/proxy-client-ca.key
	I1006 14:55:38.968783  696361 certs.go:257] generating profile certs ...
	I1006 14:55:38.968919  696361 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/client.key
	I1006 14:55:38.968957  696361 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.key.ac196ca6
	I1006 14:55:38.968987  696361 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.crt.ac196ca6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1006 14:55:39.196280  696361 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.crt.ac196ca6 ...
	I1006 14:55:39.196312  696361 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.crt.ac196ca6: {Name:mk7f459b7d525b4f442071bb9a0260205e39346a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:55:39.196490  696361 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.key.ac196ca6 ...
	I1006 14:55:39.196502  696361 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.key.ac196ca6: {Name:mk65b5fd8a8b6c5132068a16e7b4588d296da51b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:55:39.196576  696361 certs.go:382] copying /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.crt.ac196ca6 -> /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.crt
	I1006 14:55:39.196721  696361 certs.go:386] copying /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.key.ac196ca6 -> /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.key
	I1006 14:55:39.196852  696361 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.key
	I1006 14:55:39.196869  696361 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1006 14:55:39.196882  696361 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1006 14:55:39.196896  696361 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1006 14:55:39.196912  696361 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1006 14:55:39.196924  696361 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1006 14:55:39.196934  696361 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1006 14:55:39.196944  696361 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1006 14:55:39.196954  696361 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1006 14:55:39.197000  696361 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/629719.pem (1338 bytes)
	W1006 14:55:39.197029  696361 certs.go:480] ignoring /home/jenkins/minikube-integration/21701-626179/.minikube/certs/629719_empty.pem, impossibly tiny 0 bytes
	I1006 14:55:39.197040  696361 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca-key.pem (1675 bytes)
	I1006 14:55:39.197063  696361 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem (1082 bytes)
	I1006 14:55:39.197090  696361 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/cert.pem (1123 bytes)
	I1006 14:55:39.197112  696361 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/key.pem (1679 bytes)
	I1006 14:55:39.197153  696361 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem (1708 bytes)
	I1006 14:55:39.197178  696361 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem -> /usr/share/ca-certificates/6297192.pem
	I1006 14:55:39.197233  696361 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1006 14:55:39.197261  696361 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/629719.pem -> /usr/share/ca-certificates/629719.pem
	I1006 14:55:39.197782  696361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1006 14:55:39.216503  696361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1006 14:55:39.233130  696361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1006 14:55:39.249758  696361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1006 14:55:39.266471  696361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1006 14:55:39.282976  696361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1006 14:55:39.299460  696361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1006 14:55:39.316017  696361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1006 14:55:39.332799  696361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem --> /usr/share/ca-certificates/6297192.pem (1708 bytes)
	I1006 14:55:39.349599  696361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1006 14:55:39.366033  696361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/certs/629719.pem --> /usr/share/ca-certificates/629719.pem (1338 bytes)
	I1006 14:55:39.382453  696361 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1006 14:55:39.394283  696361 ssh_runner.go:195] Run: openssl version
	I1006 14:55:39.400262  696361 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6297192.pem && ln -fs /usr/share/ca-certificates/6297192.pem /etc/ssl/certs/6297192.pem"
	I1006 14:55:39.408362  696361 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6297192.pem
	I1006 14:55:39.411864  696361 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  6 14:13 /usr/share/ca-certificates/6297192.pem
	I1006 14:55:39.411906  696361 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6297192.pem
	I1006 14:55:39.445875  696361 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6297192.pem /etc/ssl/certs/3ec20f2e.0"
	I1006 14:55:39.453513  696361 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1006 14:55:39.462629  696361 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1006 14:55:39.466768  696361 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  6 13:56 /usr/share/ca-certificates/minikubeCA.pem
	I1006 14:55:39.466821  696361 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1006 14:55:39.509791  696361 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1006 14:55:39.520128  696361 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/629719.pem && ln -fs /usr/share/ca-certificates/629719.pem /etc/ssl/certs/629719.pem"
	I1006 14:55:39.530496  696361 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/629719.pem
	I1006 14:55:39.534149  696361 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  6 14:13 /usr/share/ca-certificates/629719.pem
	I1006 14:55:39.534196  696361 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/629719.pem
	I1006 14:55:39.568028  696361 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/629719.pem /etc/ssl/certs/51391683.0"
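The link names above (3ec20f2e.0, b5213941.0, 51391683.0) are OpenSSL subject hashes: `openssl x509 -hash -noout` prints the hash, and OpenSSL's CA lookup expects a <hash>.0 symlink in /etc/ssl/certs. A sketch that reproduces one link by shelling out to openssl rather than reimplementing the hash (linkByHash is a hypothetical helper):

package main

import (
    "fmt"
    "os"
    "os/exec"
    "strings"
)

// linkByHash creates the /etc/ssl/certs/<subject-hash>.0 symlink that
// OpenSSL's certificate lookup expects, matching the shell sequence above.
func linkByHash(certPath string) error {
    out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    if err != nil {
        return err
    }
    link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
    if _, err := os.Lstat(link); err == nil {
        return nil // already linked
    }
    return os.Symlink(certPath, link)
}

func main() {
    if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
        panic(err)
    }
    fmt.Println("linked")
}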
	I1006 14:55:39.575602  696361 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1006 14:55:39.579372  696361 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1006 14:55:39.612721  696361 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1006 14:55:39.646791  696361 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1006 14:55:39.679847  696361 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1006 14:55:39.713200  696361 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1006 14:55:39.748057  696361 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
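`openssl x509 -checkend 86400` exits nonzero when the certificate expires within the next 24 hours, which appears to be how the restart path decides whether regeneration is needed. The pure-Go equivalent with crypto/x509 (expiresWithin is a hypothetical helper):

package main

import (
    "crypto/x509"
    "encoding/pem"
    "fmt"
    "os"
    "time"
)

// expiresWithin reports whether the PEM certificate at path expires
// inside d, i.e. the same test as `openssl x509 -checkend`.
func expiresWithin(path string, d time.Duration) (bool, error) {
    data, err := os.ReadFile(path)
    if err != nil {
        return false, err
    }
    block, _ := pem.Decode(data)
    if block == nil {
        return false, fmt.Errorf("no PEM block in %s", path)
    }
    cert, err := x509.ParseCertificate(block.Bytes)
    if err != nil {
        return false, err
    }
    return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
    soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    if err != nil {
        panic(err)
    }
    fmt.Println("expires within 24h:", soon)
}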
	I1006 14:55:39.783317  696361 kubeadm.go:400] StartCluster: {Name:ha-481559 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-481559 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 14:55:39.783412  696361 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1006 14:55:39.783490  696361 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1006 14:55:39.811664  696361 cri.go:89] found id: ""
	I1006 14:55:39.811742  696361 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1006 14:55:39.819581  696361 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1006 14:55:39.819601  696361 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1006 14:55:39.819653  696361 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1006 14:55:39.826854  696361 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1006 14:55:39.827270  696361 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-481559" does not appear in /home/jenkins/minikube-integration/21701-626179/kubeconfig
	I1006 14:55:39.827381  696361 kubeconfig.go:62] /home/jenkins/minikube-integration/21701-626179/kubeconfig needs updating (will repair): [kubeconfig missing "ha-481559" cluster setting kubeconfig missing "ha-481559" context setting]
	I1006 14:55:39.827726  696361 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-626179/kubeconfig: {Name:mke84a74c9d22714f21826744ac414fa621492d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:55:39.828320  696361 kapi.go:59] client config for ha-481559: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/client.crt", KeyFile:"/home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/client.key", CAFile:"/home/jenkins/minikube-integration/21701-626179/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
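kubeconfig.go found the file missing both the "ha-481559" cluster and context entries and repaired it in place before building the client config above. A minimal sketch of such a repair with client-go's clientcmd package, using the paths and names from the log (not minikube's own code; error handling trimmed):

package main

import (
    "k8s.io/client-go/tools/clientcmd"
    "k8s.io/client-go/tools/clientcmd/api"
)

func main() {
    path := "/home/jenkins/minikube-integration/21701-626179/kubeconfig"
    cfg, err := clientcmd.LoadFromFile(path)
    if err != nil {
        panic(err)
    }
    // (Re)insert the cluster entry pointing at the control plane.
    cluster := api.NewCluster()
    cluster.Server = "https://192.168.49.2:8443"
    cluster.CertificateAuthority = "/home/jenkins/minikube-integration/21701-626179/.minikube/ca.crt"
    cfg.Clusters["ha-481559"] = cluster
    // (Re)insert the context tying the cluster to its credentials.
    ctx := api.NewContext()
    ctx.Cluster = "ha-481559"
    ctx.AuthInfo = "ha-481559"
    cfg.Contexts["ha-481559"] = ctx
    if err := clientcmd.WriteToFile(*cfg, path); err != nil {
        panic(err)
    }
}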
	I1006 14:55:39.828780  696361 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1006 14:55:39.828793  696361 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1006 14:55:39.828799  696361 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1006 14:55:39.828802  696361 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1006 14:55:39.828805  696361 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1006 14:55:39.828865  696361 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1006 14:55:39.829225  696361 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1006 14:55:39.836565  696361 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1006 14:55:39.836593  696361 kubeadm.go:601] duration metric: took 16.98578ms to restartPrimaryControlPlane
	I1006 14:55:39.836602  696361 kubeadm.go:402] duration metric: took 53.297464ms to StartCluster
	I1006 14:55:39.836618  696361 settings.go:142] acquiring lock: {Name:mk49b10f71f24d1f54d5c453b3b04e717e9a9100 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:55:39.836679  696361 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21701-626179/kubeconfig
	I1006 14:55:39.837293  696361 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-626179/kubeconfig: {Name:mke84a74c9d22714f21826744ac414fa621492d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:55:39.837551  696361 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1006 14:55:39.837640  696361 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1006 14:55:39.837721  696361 addons.go:69] Setting storage-provisioner=true in profile "ha-481559"
	I1006 14:55:39.837737  696361 addons.go:238] Setting addon storage-provisioner=true in "ha-481559"
	I1006 14:55:39.837742  696361 config.go:182] Loaded profile config "ha-481559": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 14:55:39.837756  696361 addons.go:69] Setting default-storageclass=true in profile "ha-481559"
	I1006 14:55:39.837792  696361 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-481559"
	I1006 14:55:39.837774  696361 host.go:66] Checking if "ha-481559" exists ...
	I1006 14:55:39.838098  696361 cli_runner.go:164] Run: docker container inspect ha-481559 --format={{.State.Status}}
	I1006 14:55:39.838222  696361 cli_runner.go:164] Run: docker container inspect ha-481559 --format={{.State.Status}}
	I1006 14:55:39.840891  696361 out.go:179] * Verifying Kubernetes components...
	I1006 14:55:39.841917  696361 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 14:55:39.856657  696361 kapi.go:59] client config for ha-481559: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/client.crt", KeyFile:"/home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/client.key", CAFile:"/home/jenkins/minikube-integration/21701-626179/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1006 14:55:39.857025  696361 addons.go:238] Setting addon default-storageclass=true in "ha-481559"
	I1006 14:55:39.857071  696361 host.go:66] Checking if "ha-481559" exists ...
	I1006 14:55:39.857581  696361 cli_runner.go:164] Run: docker container inspect ha-481559 --format={{.State.Status}}
	I1006 14:55:39.858971  696361 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1006 14:55:39.860226  696361 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 14:55:39.860245  696361 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1006 14:55:39.860299  696361 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:55:39.882582  696361 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1006 14:55:39.882610  696361 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1006 14:55:39.882675  696361 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:55:39.884044  696361 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa Username:docker}
	I1006 14:55:39.900943  696361 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa Username:docker}
	I1006 14:55:39.945526  696361 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1006 14:55:39.958317  696361 node_ready.go:35] waiting up to 6m0s for node "ha-481559" to be "Ready" ...
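
The wait announced here is a poll of the node object until its Ready condition turns True; every failed probe shows up below as a node_ready.go:55 warning. A minimal sketch of such a loop, assuming a standard client-go clientset (waitNodeReady is a hypothetical reduction; the real helper's structure and logging differ):

    package sketch

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitNodeReady polls the named node until Ready=True or the timeout hits,
    // treating transient errors (e.g. "connection refused" while the apiserver
    // is down) as retryable rather than fatal.
    func waitNodeReady(ctx context.Context, c kubernetes.Interface, name string) error {
        return wait.PollUntilContextTimeout(ctx, 2*time.Second, 6*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                node, err := c.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
                if err != nil {
                    return false, nil // keep polling through apiserver downtime
                }
                for _, cond := range node.Status.Conditions {
                    if cond.Type == corev1.NodeReady {
                        return cond.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
    }
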
	I1006 14:55:39.992150  696361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 14:55:40.007286  696361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	W1006 14:55:40.047812  696361 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:55:40.047859  696361 retry.go:31] will retry after 364.057024ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:55:40.064751  696361 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:55:40.064789  696361 retry.go:31] will retry after 327.571737ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
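
Each "will retry after …" line is minikube's retry helper rescheduling the failed kubectl apply with a growing, jittered delay (hundreds of milliseconds at first, tens of seconds by the end of this stretch). A minimal sketch of that pattern, assuming simple doubling with jitter (retry.go's exact policy may differ):

    package sketch

    import (
        "math/rand"
        "time"
    )

    // retryApply re-runs fn until it succeeds or the time budget is spent,
    // sleeping an exponentially growing, jittered delay between attempts.
    func retryApply(budget time.Duration, fn func() error) error {
        deadline := time.Now().Add(budget)
        delay := 300 * time.Millisecond
        var err error
        for {
            if err = fn(); err == nil {
                return nil
            }
            if time.Now().After(deadline) {
                return err // budget exhausted; surface the last error
            }
            time.Sleep(delay + time.Duration(rand.Int63n(int64(delay)))) // jitter
            delay *= 2
        }
    }
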
	I1006 14:55:40.393452  696361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1006 14:55:40.413056  696361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1006 14:55:40.448723  696361 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:55:40.448759  696361 retry.go:31] will retry after 403.141628ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:55:40.468798  696361 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:55:40.468834  696361 retry.go:31] will retry after 276.4293ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:55:40.746367  696361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1006 14:55:40.802524  696361 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:55:40.802559  696361 retry.go:31] will retry after 311.376172ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:55:40.852754  696361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1006 14:55:40.906981  696361 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:55:40.907021  696361 retry.go:31] will retry after 474.24301ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:55:41.114995  696361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1006 14:55:41.170001  696361 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:55:41.170049  696361 retry.go:31] will retry after 897.092965ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:55:41.382425  696361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1006 14:55:41.437366  696361 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:55:41.437397  696361 retry.go:31] will retry after 965.167019ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:55:41.958939  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	I1006 14:55:42.068134  696361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1006 14:55:42.122887  696361 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:55:42.122925  696361 retry.go:31] will retry after 947.959168ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:55:42.403332  696361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1006 14:55:42.457238  696361 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:55:42.457275  696361 retry.go:31] will retry after 1.650071235s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:55:43.071967  696361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1006 14:55:43.125956  696361 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:55:43.125994  696361 retry.go:31] will retry after 2.176788338s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:55:44.108266  696361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1006 14:55:44.161384  696361 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:55:44.161417  696361 retry.go:31] will retry after 2.544730451s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:55:44.459252  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	I1006 14:55:45.304030  696361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1006 14:55:45.359630  696361 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:55:45.359670  696361 retry.go:31] will retry after 2.25019711s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:55:46.459340  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	I1006 14:55:46.706682  696361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1006 14:55:46.759581  696361 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:55:46.759615  696361 retry.go:31] will retry after 2.522056071s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:55:47.610733  696361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1006 14:55:47.664269  696361 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:55:47.664306  696361 retry.go:31] will retry after 4.640766085s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:55:48.959157  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	I1006 14:55:49.282628  696361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1006 14:55:49.336384  696361 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:55:49.336418  696361 retry.go:31] will retry after 5.673676228s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:55:51.459087  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	I1006 14:55:52.305382  696361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1006 14:55:52.359321  696361 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:55:52.359361  696361 retry.go:31] will retry after 9.481577286s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:55:53.959083  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	I1006 14:55:55.010721  696361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1006 14:55:55.065131  696361 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:55:55.065171  696361 retry.go:31] will retry after 3.836963062s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:55:56.459045  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	I1006 14:55:58.902488  696361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1006 14:55:58.955901  696361 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:55:58.955935  696361 retry.go:31] will retry after 5.927536984s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:55:58.959353  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:56:01.459047  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	I1006 14:56:01.841474  696361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1006 14:56:01.898521  696361 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:56:01.898557  696361 retry.go:31] will retry after 4.904827501s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:56:03.958922  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	I1006 14:56:04.884501  696361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1006 14:56:04.939279  696361 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:56:04.939317  696361 retry.go:31] will retry after 7.40875545s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:56:05.959924  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	I1006 14:56:06.804327  696361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1006 14:56:06.857900  696361 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:56:06.857929  696361 retry.go:31] will retry after 19.104468711s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:56:08.458883  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:56:10.459161  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	I1006 14:56:12.348374  696361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1006 14:56:12.402365  696361 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:56:12.402403  696361 retry.go:31] will retry after 18.378132313s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:56:12.959096  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:56:15.458930  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:56:17.459924  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:56:19.959250  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:56:22.459052  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:56:24.958967  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	I1006 14:56:25.962990  696361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1006 14:56:26.017478  696361 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:56:26.017517  696361 retry.go:31] will retry after 29.077614598s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:56:26.959419  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:56:28.959649  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	I1006 14:56:30.781291  696361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1006 14:56:30.836228  696361 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:56:30.836263  696361 retry.go:31] will retry after 39.344728119s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:56:30.959929  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:56:33.459024  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:56:35.959931  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:56:38.459088  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:56:40.959025  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:56:43.459871  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:56:45.959871  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:56:48.460091  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:56:50.959024  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:56:52.959070  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	I1006 14:56:55.096159  696361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1006 14:56:55.150330  696361 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:56:55.150378  696361 retry.go:31] will retry after 28.420260342s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:56:55.459257  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:56:57.959372  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:56:59.959551  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:57:01.959698  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:57:04.459881  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:57:06.959832  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:57:08.959887  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	I1006 14:57:10.181504  696361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1006 14:57:10.237344  696361 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:57:10.237488  696361 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1006 14:57:11.459330  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:57:13.959081  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:57:15.959422  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:57:18.459013  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:57:20.459598  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:57:22.959317  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	I1006 14:57:23.571775  696361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1006 14:57:23.627058  696361 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:57:23.627201  696361 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1006 14:57:23.629381  696361 out.go:179] * Enabled addons: 
	I1006 14:57:23.630438  696361 addons.go:514] duration metric: took 1m43.792792491s for enable addons: enabled=[]
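
Every failure in this stretch is the same symptom: nothing is answering on the apiserver port, so kubectl's OpenAPI download (localhost:8443) and the Ready polls (192.168.49.2:8443) both get "connection refused", and addon enablement gives up with an empty list. The --validate=false hint in the stderr is a red herring; with the apiserver down, the apply itself would fail the same way. A quick triage sketch (probe is a hypothetical helper, not part of minikube) that distinguishes a closed port from a filtered one:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    // probe dials the endpoint and reports whether anything is listening;
    // "connection refused" means the port is closed (apiserver down), while a
    // timeout would point at network filtering instead.
    func probe(addr string) {
        conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
        if err != nil {
            fmt.Printf("%s unreachable: %v\n", addr, err)
            return
        }
        conn.Close()
        fmt.Printf("%s accepting connections\n", addr)
    }

    func main() {
        probe("192.168.49.2:8443") // endpoint used by the Ready polls
        probe("127.0.0.1:8443")    // endpoint kubectl's validator dials
    }
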
	W1006 14:57:25.459171  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	[... identical "(will retry)" line repeated 110 more times, roughly every 2.5s, from 14:57:27 through 15:01:37 ...]
	W1006 15:01:39.459459  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	I1006 15:01:39.959410  696361 node_ready.go:38] duration metric: took 6m0.001052975s for node "ha-481559" to be "Ready" ...
	I1006 15:01:39.961897  696361 out.go:203] 
	W1006 15:01:39.963068  696361 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1006 15:01:39.963087  696361 out.go:285] * 
	W1006 15:01:39.964873  696361 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1006 15:01:39.966045  696361 out.go:203] 

** /stderr **
ha_test.go:471: failed to run minikube start. args "out/minikube-linux-amd64 -p ha-481559 node list --alsologtostderr -v 5" : exit status 80
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-481559 node list --alsologtostderr -v 5
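The stderr dump above is minikube's node-readiness wait timing out: every GET against the apiserver at 192.168.49.2:8443 was refused, so the poll loop in node_ready.go retried for the full 6m0s StartHostTimeout and then exited with GUEST_START / "context deadline exceeded". For illustration only, here is a minimal client-go sketch of the same poll-until-Ready pattern; the kubeconfig path, the ~2.5s interval, and the hard-coded node name are assumptions for the sketch, and this is not minikube's actual node_ready.go code:

// wait_node_ready.go - sketch of a poll-until-Ready loop with a 6m deadline.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: a kubeconfig at the default ~/.kube/config location.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Mirror the log: a 6-minute deadline, polling every ~2.5s, ending in
	// "context deadline exceeded" if the node never reports Ready.
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	for {
		node, err := cs.CoreV1().Nodes().Get(ctx, "ha-481559", metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					fmt.Println("node is Ready")
					return
				}
			}
		} else {
			fmt.Println("error getting node (will retry):", err)
		}
		select {
		case <-ctx.Done():
			fmt.Println("WaitNodeCondition:", ctx.Err())
			return
		case <-time.After(2500 * time.Millisecond):
		}
	}
}

Against a cluster in the state captured above, every Get fails with "connect: connection refused" until the context expires, which matches the exit status 80 the test records.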
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-481559
helpers_test.go:243: (dbg) docker inspect ha-481559:

-- stdout --
	[
	    {
	        "Id": "8b017d29b6b188c11217460aa328e959f3cceef4aaac68c0efbe9c3f356f27b0",
	        "Created": "2025-10-06T14:44:39.623616791Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 696563,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-06T14:55:32.848872757Z",
	            "FinishedAt": "2025-10-06T14:55:31.716309888Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/8b017d29b6b188c11217460aa328e959f3cceef4aaac68c0efbe9c3f356f27b0/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8b017d29b6b188c11217460aa328e959f3cceef4aaac68c0efbe9c3f356f27b0/hostname",
	        "HostsPath": "/var/lib/docker/containers/8b017d29b6b188c11217460aa328e959f3cceef4aaac68c0efbe9c3f356f27b0/hosts",
	        "LogPath": "/var/lib/docker/containers/8b017d29b6b188c11217460aa328e959f3cceef4aaac68c0efbe9c3f356f27b0/8b017d29b6b188c11217460aa328e959f3cceef4aaac68c0efbe9c3f356f27b0-json.log",
	        "Name": "/ha-481559",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-481559:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-481559",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "8b017d29b6b188c11217460aa328e959f3cceef4aaac68c0efbe9c3f356f27b0",
	                "LowerDir": "/var/lib/docker/overlay2/764c4d13032cea1c981a1513641378a4e309e5bc01b5356c6fecba1c3e0e2311-init/diff:/var/lib/docker/overlay2/498c39ad2e273bbda04a4b230222b9767ea2da097b1fe98436168d26143cd080/diff",
	                "MergedDir": "/var/lib/docker/overlay2/764c4d13032cea1c981a1513641378a4e309e5bc01b5356c6fecba1c3e0e2311/merged",
	                "UpperDir": "/var/lib/docker/overlay2/764c4d13032cea1c981a1513641378a4e309e5bc01b5356c6fecba1c3e0e2311/diff",
	                "WorkDir": "/var/lib/docker/overlay2/764c4d13032cea1c981a1513641378a4e309e5bc01b5356c6fecba1c3e0e2311/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "ha-481559",
	                "Source": "/var/lib/docker/volumes/ha-481559/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-481559",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-481559",
	                "name.minikube.sigs.k8s.io": "ha-481559",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a18d7522c85960ccdcf70fe347e0c10a64182561d1f729321bfbf2cdfd2482d4",
	            "SandboxKey": "/var/run/docker/netns/a18d7522c859",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32888"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32889"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32892"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32890"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32891"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-481559": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "d2:f7:17:fa:b2:38",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "be549c6a1ae4457d4629d9a7f86cde88021333ee0af8bb7a740b008115c43dde",
	                    "EndpointID": "a1d09ec0db4820720a30f43507e6c86000afb21b7ea62df9051d26d4095c5091",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-481559",
	                        "8b017d29b6b1"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
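The inspect dump shows the Docker layer itself is healthy: State.Status is "running", RestartCount is 0, and the container holds 192.168.49.2 on the ha-481559 network, so the failure is inside the guest rather than at the container runtime. As an aside, the handful of fields the post-mortem actually consults can be pulled with a Go template in the same style the harness uses for its cli_runner commands; the helper below is a hypothetical sketch (assumes docker on PATH and the ha-481559 container present), not part of the test harness:

// inspect_fields.go - pull selected docker inspect fields via a Go template.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// index is needed because the network key "ha-481559" contains a hyphen.
	format := `{{.State.Status}} restarts={{.RestartCount}} ` +
		`ip={{(index .NetworkSettings.Networks "ha-481559").IPAddress}}`
	out, err := exec.Command("docker", "container", "inspect",
		"--format", format, "ha-481559").Output()
	if err != nil {
		fmt.Println("docker inspect failed:", err)
		return
	}
	// e.g. "running restarts=0 ip=192.168.49.2" for the container above.
	fmt.Println(strings.TrimSpace(string(out)))
}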
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-481559 -n ha-481559
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-481559 -n ha-481559: exit status 2 (298.816745ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
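Exit status 2 next to a "Running" host line means the status command itself ran, but minikube judged some component beyond the host container unhealthy, which is why the harness notes "(may be ok)". Below is a minimal sketch of capturing both the template output and the non-zero exit code; the binary path and profile name are taken from the log, while the error handling is generic Go rather than the harness's own helper:

// status_check.go - run minikube status and separate stdout from exit code.
package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "status",
		"--format={{.Host}}", "-p", "ha-481559", "-n", "ha-481559")
	out, err := cmd.Output() // stdout is captured even on a non-zero exit
	host := strings.TrimSpace(string(out))
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		// Matches the post-mortem above: host "Running", exit status 2.
		fmt.Printf("host=%q exit=%d\n", host, ee.ExitCode())
		return
	}
	if err != nil {
		fmt.Println("could not run minikube status:", err)
		return
	}
	fmt.Printf("host=%q exit=0\n", host)
}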
helpers_test.go:252: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-481559 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                    ARGS                                     │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ kubectl │ ha-481559 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml            │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 14:52 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- rollout status deployment/busybox                      │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 14:52 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 14:52 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 14:52 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 14:52 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 14:53 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 14:53 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 14:53 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 14:53 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 14:53 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 14:53 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 14:54 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 14:54 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'       │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 14:54 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- exec  -- nslookup kubernetes.io                        │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 14:54 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- exec  -- nslookup kubernetes.default                   │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 14:54 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 14:54 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'       │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 14:54 UTC │                     │
	│ node    │ ha-481559 node add --alsologtostderr -v 5                                   │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 14:54 UTC │                     │
	│ node    │ ha-481559 node stop m02 --alsologtostderr -v 5                              │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 14:54 UTC │                     │
	│ node    │ ha-481559 node start m02 --alsologtostderr -v 5                             │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 14:54 UTC │                     │
	│ node    │ ha-481559 node list --alsologtostderr -v 5                                  │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 14:55 UTC │                     │
	│ stop    │ ha-481559 stop --alsologtostderr -v 5                                       │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 14:55 UTC │ 06 Oct 25 14:55 UTC │
	│ start   │ ha-481559 start --wait true --alsologtostderr -v 5                          │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 14:55 UTC │                     │
	│ node    │ ha-481559 node list --alsologtostderr -v 5                                  │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 15:01 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/06 14:55:32
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1006 14:55:32.625450  696361 out.go:360] Setting OutFile to fd 1 ...
	I1006 14:55:32.625699  696361 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 14:55:32.625708  696361 out.go:374] Setting ErrFile to fd 2...
	I1006 14:55:32.625712  696361 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 14:55:32.625887  696361 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-626179/.minikube/bin
	I1006 14:55:32.626365  696361 out.go:368] Setting JSON to false
	I1006 14:55:32.627324  696361 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":20269,"bootTime":1759742264,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1006 14:55:32.627441  696361 start.go:140] virtualization: kvm guest
	I1006 14:55:32.629359  696361 out.go:179] * [ha-481559] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1006 14:55:32.630682  696361 out.go:179]   - MINIKUBE_LOCATION=21701
	I1006 14:55:32.630681  696361 notify.go:220] Checking for updates...
	I1006 14:55:32.632684  696361 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1006 14:55:32.633920  696361 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21701-626179/kubeconfig
	I1006 14:55:32.635038  696361 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21701-626179/.minikube
	I1006 14:55:32.635990  696361 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1006 14:55:32.636965  696361 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1006 14:55:32.638369  696361 config.go:182] Loaded profile config "ha-481559": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 14:55:32.638498  696361 driver.go:421] Setting default libvirt URI to qemu:///system
	I1006 14:55:32.662312  696361 docker.go:123] docker version: linux-28.5.0:Docker Engine - Community
	I1006 14:55:32.662403  696361 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 14:55:32.719438  696361 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-06 14:55:32.709294788 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1006 14:55:32.719550  696361 docker.go:318] overlay module found
	I1006 14:55:32.721174  696361 out.go:179] * Using the docker driver based on existing profile
	I1006 14:55:32.722228  696361 start.go:304] selected driver: docker
	I1006 14:55:32.722242  696361 start.go:924] validating driver "docker" against &{Name:ha-481559 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-481559 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 14:55:32.722316  696361 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1006 14:55:32.722398  696361 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 14:55:32.778099  696361 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-06 14:55:32.768235461 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1006 14:55:32.778829  696361 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1006 14:55:32.778865  696361 cni.go:84] Creating CNI manager for ""
	I1006 14:55:32.778913  696361 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1006 14:55:32.778963  696361 start.go:348] cluster config:
	{Name:ha-481559 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-481559 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 14:55:32.780704  696361 out.go:179] * Starting "ha-481559" primary control-plane node in "ha-481559" cluster
	I1006 14:55:32.781770  696361 cache.go:123] Beginning downloading kic base image for docker with crio
	I1006 14:55:32.782811  696361 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1006 14:55:32.783693  696361 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1006 14:55:32.783726  696361 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21701-626179/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1006 14:55:32.783724  696361 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1006 14:55:32.783743  696361 cache.go:58] Caching tarball of preloaded images
	I1006 14:55:32.783836  696361 preload.go:233] Found /home/jenkins/minikube-integration/21701-626179/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1006 14:55:32.783847  696361 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1006 14:55:32.783950  696361 profile.go:143] Saving config to /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/config.json ...
	I1006 14:55:32.804191  696361 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1006 14:55:32.804233  696361 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1006 14:55:32.804253  696361 cache.go:232] Successfully downloaded all kic artifacts
	I1006 14:55:32.804278  696361 start.go:360] acquireMachinesLock for ha-481559: {Name:mk240cd185ab39e9e4d3fa7c476aea5736cb5b11 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1006 14:55:32.804339  696361 start.go:364] duration metric: took 38.329µs to acquireMachinesLock for "ha-481559"
	I1006 14:55:32.804358  696361 start.go:96] Skipping create...Using existing machine configuration
	I1006 14:55:32.804363  696361 fix.go:54] fixHost starting: 
	I1006 14:55:32.804593  696361 cli_runner.go:164] Run: docker container inspect ha-481559 --format={{.State.Status}}
	I1006 14:55:32.821756  696361 fix.go:112] recreateIfNeeded on ha-481559: state=Stopped err=<nil>
	W1006 14:55:32.821781  696361 fix.go:138] unexpected machine state, will restart: <nil>
	I1006 14:55:32.823475  696361 out.go:252] * Restarting existing docker container for "ha-481559" ...
	I1006 14:55:32.823539  696361 cli_runner.go:164] Run: docker start ha-481559
	I1006 14:55:33.064065  696361 cli_runner.go:164] Run: docker container inspect ha-481559 --format={{.State.Status}}
	I1006 14:55:33.082711  696361 kic.go:430] container "ha-481559" state is running.
	I1006 14:55:33.083092  696361 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-481559
	I1006 14:55:33.102599  696361 profile.go:143] Saving config to /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/config.json ...
	I1006 14:55:33.102818  696361 machine.go:93] provisionDockerMachine start ...
	I1006 14:55:33.102885  696361 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:55:33.121902  696361 main.go:141] libmachine: Using SSH client type: native
	I1006 14:55:33.122245  696361 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32888 <nil> <nil>}
	I1006 14:55:33.122265  696361 main.go:141] libmachine: About to run SSH command:
	hostname
	I1006 14:55:33.122961  696361 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:35020->127.0.0.1:32888: read: connection reset by peer
	I1006 14:55:36.268055  696361 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-481559
	
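The failed dial at 14:55:33 ("connection reset by peer") followed by a clean hostname run three seconds later is the usual wait-for-sshd loop after docker start. A minimal stdlib sketch of that readiness wait, assuming the mapped 22/tcp port from this log (127.0.0.1:32888); the helper name is invented:

package main

import (
	"fmt"
	"net"
	"time"
)

// waitForSSH dials addr until the TCP handshake succeeds or the deadline
// passes. It does not speak SSH; a completed dial is taken as "sshd is
// listening", which is the cheap readiness signal needed here.
func waitForSSH(addr string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("ssh on %s not ready: %v", addr, err)
		}
		time.Sleep(500 * time.Millisecond) // brief pause before redialing
	}
}

func main() {
	if err := waitForSSH("127.0.0.1:32888", time.Minute); err != nil {
		fmt.Println(err)
	}
}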
	I1006 14:55:36.268107  696361 ubuntu.go:182] provisioning hostname "ha-481559"
	I1006 14:55:36.268177  696361 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:55:36.286749  696361 main.go:141] libmachine: Using SSH client type: native
	I1006 14:55:36.287029  696361 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32888 <nil> <nil>}
	I1006 14:55:36.287044  696361 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-481559 && echo "ha-481559" | sudo tee /etc/hostname
	I1006 14:55:36.438131  696361 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-481559
	
	I1006 14:55:36.438276  696361 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:55:36.455780  696361 main.go:141] libmachine: Using SSH client type: native
	I1006 14:55:36.455989  696361 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32888 <nil> <nil>}
	I1006 14:55:36.456006  696361 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-481559' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-481559/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-481559' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1006 14:55:36.598528  696361 main.go:141] libmachine: SSH cmd err, output: <nil>: 
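The SSH'd shell above is an idempotent /etc/hosts edit: skip if some line already ends with the hostname, rewrite an existing 127.0.1.1 entry, else append one. The same rule as a pure-Go string transform (function name invented; grep's \s is approximated by checking both space and tab):

package main

import (
	"fmt"
	"strings"
)

// ensureHostsEntry mirrors the shell above: leave the content alone if some
// line already ends in the hostname, rewrite an existing 127.0.1.1 line,
// otherwise append a new entry.
func ensureHostsEntry(hosts, host string) string {
	lines := strings.Split(hosts, "\n")
	for _, l := range lines {
		if strings.HasSuffix(l, " "+host) || strings.HasSuffix(l, "\t"+host) {
			return hosts // grep -xq '.*\sHOST' case: already present
		}
	}
	for i, l := range lines {
		if strings.HasPrefix(l, "127.0.1.1") {
			lines[i] = "127.0.1.1 " + host // sed rewrite case
			return strings.Join(lines, "\n")
		}
	}
	return hosts + "\n127.0.1.1 " + host // tee -a case
}

func main() {
	fmt.Println(ensureHostsEntry("127.0.0.1 localhost\n127.0.1.1 stale-name", "ha-481559"))
}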
	I1006 14:55:36.598558  696361 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21701-626179/.minikube CaCertPath:/home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21701-626179/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21701-626179/.minikube}
	I1006 14:55:36.598594  696361 ubuntu.go:190] setting up certificates
	I1006 14:55:36.598608  696361 provision.go:84] configureAuth start
	I1006 14:55:36.598671  696361 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-481559
	I1006 14:55:36.615965  696361 provision.go:143] copyHostCerts
	I1006 14:55:36.616004  696361 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21701-626179/.minikube/ca.pem
	I1006 14:55:36.616065  696361 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-626179/.minikube/ca.pem, removing ...
	I1006 14:55:36.616086  696361 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-626179/.minikube/ca.pem
	I1006 14:55:36.616175  696361 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21701-626179/.minikube/ca.pem (1082 bytes)
	I1006 14:55:36.616305  696361 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21701-626179/.minikube/cert.pem
	I1006 14:55:36.616337  696361 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-626179/.minikube/cert.pem, removing ...
	I1006 14:55:36.616347  696361 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-626179/.minikube/cert.pem
	I1006 14:55:36.616392  696361 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21701-626179/.minikube/cert.pem (1123 bytes)
	I1006 14:55:36.616465  696361 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21701-626179/.minikube/key.pem
	I1006 14:55:36.616495  696361 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-626179/.minikube/key.pem, removing ...
	I1006 14:55:36.616506  696361 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-626179/.minikube/key.pem
	I1006 14:55:36.616549  696361 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21701-626179/.minikube/key.pem (1679 bytes)
	I1006 14:55:36.616693  696361 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21701-626179/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca-key.pem org=jenkins.ha-481559 san=[127.0.0.1 192.168.49.2 ha-481559 localhost minikube]
	I1006 14:55:36.950020  696361 provision.go:177] copyRemoteCerts
	I1006 14:55:36.950096  696361 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1006 14:55:36.950140  696361 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:55:36.967901  696361 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa Username:docker}
	I1006 14:55:37.069642  696361 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1006 14:55:37.069695  696361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1006 14:55:37.087171  696361 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1006 14:55:37.087278  696361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1006 14:55:37.104388  696361 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1006 14:55:37.104471  696361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1006 14:55:37.121024  696361 provision.go:87] duration metric: took 522.404021ms to configureAuth
	I1006 14:55:37.121046  696361 ubuntu.go:206] setting minikube options for container-runtime
	I1006 14:55:37.121222  696361 config.go:182] Loaded profile config "ha-481559": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 14:55:37.121328  696361 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:55:37.139234  696361 main.go:141] libmachine: Using SSH client type: native
	I1006 14:55:37.139495  696361 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32888 <nil> <nil>}
	I1006 14:55:37.139522  696361 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1006 14:55:37.394808  696361 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1006 14:55:37.394835  696361 machine.go:96] duration metric: took 4.292002113s to provisionDockerMachine
	I1006 14:55:37.394849  696361 start.go:293] postStartSetup for "ha-481559" (driver="docker")
	I1006 14:55:37.394860  696361 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1006 14:55:37.394929  696361 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1006 14:55:37.394973  696361 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:55:37.413054  696361 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa Username:docker}
	I1006 14:55:37.514362  696361 ssh_runner.go:195] Run: cat /etc/os-release
	I1006 14:55:37.517813  696361 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1006 14:55:37.517836  696361 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1006 14:55:37.517847  696361 filesync.go:126] Scanning /home/jenkins/minikube-integration/21701-626179/.minikube/addons for local assets ...
	I1006 14:55:37.517906  696361 filesync.go:126] Scanning /home/jenkins/minikube-integration/21701-626179/.minikube/files for local assets ...
	I1006 14:55:37.518019  696361 filesync.go:149] local asset: /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem -> 6297192.pem in /etc/ssl/certs
	I1006 14:55:37.518030  696361 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem -> /etc/ssl/certs/6297192.pem
	I1006 14:55:37.518152  696361 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1006 14:55:37.525401  696361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem --> /etc/ssl/certs/6297192.pem (1708 bytes)
	I1006 14:55:37.541908  696361 start.go:296] duration metric: took 147.043607ms for postStartSetup
	I1006 14:55:37.541980  696361 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1006 14:55:37.542026  696361 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:55:37.559403  696361 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa Username:docker}
	I1006 14:55:37.657540  696361 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1006 14:55:37.662107  696361 fix.go:56] duration metric: took 4.857735821s for fixHost
	I1006 14:55:37.662133  696361 start.go:83] releasing machines lock for "ha-481559", held for 4.857782629s
	I1006 14:55:37.662199  696361 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-481559
	I1006 14:55:37.679712  696361 ssh_runner.go:195] Run: cat /version.json
	I1006 14:55:37.679736  696361 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1006 14:55:37.679759  696361 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:55:37.679787  696361 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:55:37.697300  696361 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa Username:docker}
	I1006 14:55:37.697564  696361 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa Username:docker}
	I1006 14:55:37.851243  696361 ssh_runner.go:195] Run: systemctl --version
	I1006 14:55:37.857782  696361 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1006 14:55:37.892065  696361 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1006 14:55:37.896595  696361 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1006 14:55:37.896653  696361 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1006 14:55:37.904304  696361 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1006 14:55:37.904326  696361 start.go:495] detecting cgroup driver to use...
	I1006 14:55:37.904354  696361 detect.go:190] detected "systemd" cgroup driver on host os
	I1006 14:55:37.904388  696361 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1006 14:55:37.918633  696361 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1006 14:55:37.929951  696361 docker.go:218] disabling cri-docker service (if available) ...
	I1006 14:55:37.930003  696361 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1006 14:55:37.943242  696361 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1006 14:55:37.954619  696361 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1006 14:55:38.026399  696361 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1006 14:55:38.105961  696361 docker.go:234] disabling docker service ...
	I1006 14:55:38.106042  696361 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1006 14:55:38.120803  696361 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1006 14:55:38.132404  696361 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1006 14:55:38.209222  696361 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1006 14:55:38.289009  696361 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1006 14:55:38.301313  696361 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1006 14:55:38.315068  696361 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1006 14:55:38.315130  696361 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:55:38.323823  696361 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1006 14:55:38.323882  696361 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:55:38.332351  696361 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:55:38.340690  696361 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:55:38.349706  696361 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1006 14:55:38.357352  696361 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:55:38.365990  696361 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:55:38.374123  696361 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:55:38.382364  696361 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1006 14:55:38.389293  696361 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1006 14:55:38.396102  696361 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 14:55:38.474259  696361 ssh_runner.go:195] Run: sudo systemctl restart crio
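The run of sed commands above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, switch cgroup_manager to systemd, and re-add conmon_cgroup = "pod". A sketch of the same edits as pure-Go string rewriting (it assumes no pre-existing conmon_cgroup line, matching the delete-then-append sed pair):

package main

import (
	"fmt"
	"regexp"
)

// rewriteCrioConf applies the same line-level edits as the sed commands in
// the log: pin pause_image, force the systemd cgroup manager, and insert
// conmon_cgroup = "pod" right after it.
func rewriteCrioConf(conf string) string {
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, "cgroup_manager = \"systemd\"\nconmon_cgroup = \"pod\"")
	return conf
}

func main() {
	in := "pause_image = \"old\"\ncgroup_manager = \"cgroupfs\"\n"
	fmt.Print(rewriteCrioConf(in))
}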
	I1006 14:55:38.579652  696361 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1006 14:55:38.579712  696361 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1006 14:55:38.583658  696361 start.go:563] Will wait 60s for crictl version
	I1006 14:55:38.583711  696361 ssh_runner.go:195] Run: which crictl
	I1006 14:55:38.587093  696361 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1006 14:55:38.611002  696361 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1006 14:55:38.611081  696361 ssh_runner.go:195] Run: crio --version
	I1006 14:55:38.639866  696361 ssh_runner.go:195] Run: crio --version
	I1006 14:55:38.670329  696361 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1006 14:55:38.671337  696361 cli_runner.go:164] Run: docker network inspect ha-481559 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
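The --format arguments passed to docker container/network inspect throughout this log are Go text/template programs evaluated against the inspect document. A self-contained sketch that runs the 22/tcp port-lookup template from above against a mock structure (the struct is a pared-down stand-in for docker's real inspect output):

package main

import (
	"os"
	"text/template"
)

// Pared-down stand-in for the fragment of docker's inspect output that the
// template below touches.
type inspect struct {
	NetworkSettings struct {
		Ports map[string][]struct{ HostPort string }
	}
}

func main() {
	var c inspect
	c.NetworkSettings.Ports = map[string][]struct{ HostPort string }{
		"22/tcp": {{HostPort: "32888"}},
	}
	// The template string minikube passes via --format in this log.
	t := template.Must(template.New("port").Parse(
		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`))
	t.Execute(os.Stdout, c) // prints: 32888
}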
	I1006 14:55:38.687899  696361 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1006 14:55:38.691971  696361 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1006 14:55:38.702038  696361 kubeadm.go:883] updating cluster {Name:ha-481559 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-481559 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1006 14:55:38.702130  696361 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1006 14:55:38.702176  696361 ssh_runner.go:195] Run: sudo crictl images --output json
	I1006 14:55:38.734706  696361 crio.go:514] all images are preloaded for cri-o runtime.
	I1006 14:55:38.734729  696361 crio.go:433] Images already preloaded, skipping extraction
	I1006 14:55:38.734788  696361 ssh_runner.go:195] Run: sudo crictl images --output json
	I1006 14:55:38.761257  696361 crio.go:514] all images are preloaded for cri-o runtime.
	I1006 14:55:38.761292  696361 cache_images.go:85] Images are preloaded, skipping loading
	I1006 14:55:38.761302  696361 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1006 14:55:38.761450  696361 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-481559 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-481559 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1006 14:55:38.761537  696361 ssh_runner.go:195] Run: crio config
	I1006 14:55:38.806722  696361 cni.go:84] Creating CNI manager for ""
	I1006 14:55:38.806741  696361 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1006 14:55:38.806764  696361 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1006 14:55:38.806790  696361 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-481559 NodeName:ha-481559 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1006 14:55:38.806983  696361 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-481559"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
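The kubeadm config above is not hand-written; node-specific values such as the name, advertise address, and port are filled into a template to produce the 2205-byte kubeadm.yaml.new scp'd below. A sketch of that generation step for just the InitConfiguration stanza, using only values visible in this log:

package main

import (
	"os"
	"text/template"
)

// Just the InitConfiguration stanza; the real generator also emits the
// ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration
// documents seen above.
const stanza = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.IP}}
  bindPort: {{.Port}}
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.Name}}"
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(stanza))
	t.Execute(os.Stdout, struct {
		Name, IP string
		Port     int
	}{Name: "ha-481559", IP: "192.168.49.2", Port: 8443})
}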
	I1006 14:55:38.807055  696361 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1006 14:55:38.815286  696361 binaries.go:44] Found k8s binaries, skipping transfer
	I1006 14:55:38.815345  696361 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1006 14:55:38.822791  696361 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1006 14:55:38.834974  696361 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1006 14:55:38.846564  696361 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1006 14:55:38.858492  696361 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1006 14:55:38.861799  696361 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1006 14:55:38.871288  696361 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 14:55:38.948793  696361 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1006 14:55:38.968510  696361 certs.go:69] Setting up /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559 for IP: 192.168.49.2
	I1006 14:55:38.968530  696361 certs.go:195] generating shared ca certs ...
	I1006 14:55:38.968554  696361 certs.go:227] acquiring lock for ca certs: {Name:mka0cc25cb6a953e937aa825fc55167759271aaf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:55:38.968714  696361 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21701-626179/.minikube/ca.key
	I1006 14:55:38.968769  696361 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21701-626179/.minikube/proxy-client-ca.key
	I1006 14:55:38.968783  696361 certs.go:257] generating profile certs ...
	I1006 14:55:38.968919  696361 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/client.key
	I1006 14:55:38.968957  696361 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.key.ac196ca6
	I1006 14:55:38.968987  696361 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.crt.ac196ca6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1006 14:55:39.196280  696361 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.crt.ac196ca6 ...
	I1006 14:55:39.196312  696361 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.crt.ac196ca6: {Name:mk7f459b7d525b4f442071bb9a0260205e39346a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:55:39.196490  696361 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.key.ac196ca6 ...
	I1006 14:55:39.196502  696361 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.key.ac196ca6: {Name:mk65b5fd8a8b6c5132068a16e7b4588d296da51b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:55:39.196576  696361 certs.go:382] copying /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.crt.ac196ca6 -> /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.crt
	I1006 14:55:39.196721  696361 certs.go:386] copying /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.key.ac196ca6 -> /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.key
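apiserver.crt.ac196ca6 above is generated with the IP SANs [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]. A stdlib sketch of minting a certificate with that SAN list; for brevity it self-signs, whereas the real cert is signed by minikubeCA:

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	key, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	tpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config above
		// The SAN list recorded in the log for apiserver.crt.
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.49.2"),
		},
		KeyUsage:    x509.KeyUsageDigitalSignature,
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, _ := x509.CreateCertificate(rand.Reader, &tpl, &tpl, &key.PublicKey, key)
	fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
}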
	I1006 14:55:39.196852  696361 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.key
	I1006 14:55:39.196869  696361 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1006 14:55:39.196882  696361 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1006 14:55:39.196896  696361 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1006 14:55:39.196912  696361 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1006 14:55:39.196924  696361 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1006 14:55:39.196934  696361 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1006 14:55:39.196944  696361 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1006 14:55:39.196954  696361 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1006 14:55:39.197000  696361 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/629719.pem (1338 bytes)
	W1006 14:55:39.197029  696361 certs.go:480] ignoring /home/jenkins/minikube-integration/21701-626179/.minikube/certs/629719_empty.pem, impossibly tiny 0 bytes
	I1006 14:55:39.197040  696361 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca-key.pem (1675 bytes)
	I1006 14:55:39.197063  696361 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem (1082 bytes)
	I1006 14:55:39.197090  696361 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/cert.pem (1123 bytes)
	I1006 14:55:39.197112  696361 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/key.pem (1679 bytes)
	I1006 14:55:39.197153  696361 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem (1708 bytes)
	I1006 14:55:39.197178  696361 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem -> /usr/share/ca-certificates/6297192.pem
	I1006 14:55:39.197233  696361 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1006 14:55:39.197261  696361 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/629719.pem -> /usr/share/ca-certificates/629719.pem
	I1006 14:55:39.197782  696361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1006 14:55:39.216503  696361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1006 14:55:39.233130  696361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1006 14:55:39.249758  696361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1006 14:55:39.266471  696361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1006 14:55:39.282976  696361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1006 14:55:39.299460  696361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1006 14:55:39.316017  696361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1006 14:55:39.332799  696361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem --> /usr/share/ca-certificates/6297192.pem (1708 bytes)
	I1006 14:55:39.349599  696361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1006 14:55:39.366033  696361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/certs/629719.pem --> /usr/share/ca-certificates/629719.pem (1338 bytes)
	I1006 14:55:39.382453  696361 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1006 14:55:39.394283  696361 ssh_runner.go:195] Run: openssl version
	I1006 14:55:39.400262  696361 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6297192.pem && ln -fs /usr/share/ca-certificates/6297192.pem /etc/ssl/certs/6297192.pem"
	I1006 14:55:39.408362  696361 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6297192.pem
	I1006 14:55:39.411864  696361 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  6 14:13 /usr/share/ca-certificates/6297192.pem
	I1006 14:55:39.411906  696361 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6297192.pem
	I1006 14:55:39.445875  696361 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6297192.pem /etc/ssl/certs/3ec20f2e.0"
	I1006 14:55:39.453513  696361 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1006 14:55:39.462629  696361 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1006 14:55:39.466768  696361 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  6 13:56 /usr/share/ca-certificates/minikubeCA.pem
	I1006 14:55:39.466821  696361 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1006 14:55:39.509791  696361 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1006 14:55:39.520128  696361 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/629719.pem && ln -fs /usr/share/ca-certificates/629719.pem /etc/ssl/certs/629719.pem"
	I1006 14:55:39.530496  696361 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/629719.pem
	I1006 14:55:39.534149  696361 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  6 14:13 /usr/share/ca-certificates/629719.pem
	I1006 14:55:39.534196  696361 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/629719.pem
	I1006 14:55:39.568028  696361 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/629719.pem /etc/ssl/certs/51391683.0"
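The openssl x509 -hash / ln -fs pairs above install each PEM into OpenSSL's hashed-directory layout: the subject hash names a <hash>.0 symlink in /etc/ssl/certs (e.g. b5213941.0 for minikubeCA.pem). A sketch driving the same two steps from Go (helper name invented):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCA computes the OpenSSL subject hash of pemPath and symlinks the
// file into certsDir as <hash>.0, the layout TLS stacks scan for trusted CAs.
func installCA(pemPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	os.Remove(link) // emulate ln -f: drop any stale link first
	return os.Symlink(pemPath, link)
}

func main() {
	fmt.Println(installCA("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"))
}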
	I1006 14:55:39.575602  696361 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1006 14:55:39.579372  696361 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1006 14:55:39.612721  696361 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1006 14:55:39.646791  696361 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1006 14:55:39.679847  696361 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1006 14:55:39.713200  696361 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1006 14:55:39.748057  696361 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
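Each of the six openssl runs above uses -checkend 86400, which makes openssl exit non-zero when the certificate expires within the next 24 hours. A small sketch that maps that exit status to a boolean (path taken from the log):

package main

import (
	"fmt"
	"os/exec"
)

// validFor24h reports whether certPath will still be valid in 86400 seconds;
// openssl signals "will expire" through a non-zero exit status.
func validFor24h(certPath string) bool {
	err := exec.Command("openssl", "x509", "-noout",
		"-in", certPath, "-checkend", "86400").Run()
	return err == nil
}

func main() {
	fmt.Println(validFor24h("/var/lib/minikube/certs/apiserver-kubelet-client.crt"))
}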
	I1006 14:55:39.783317  696361 kubeadm.go:400] StartCluster: {Name:ha-481559 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-481559 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 14:55:39.783412  696361 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1006 14:55:39.783490  696361 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1006 14:55:39.811664  696361 cri.go:89] found id: ""
	I1006 14:55:39.811742  696361 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1006 14:55:39.819581  696361 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1006 14:55:39.819601  696361 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1006 14:55:39.819653  696361 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1006 14:55:39.826854  696361 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1006 14:55:39.827270  696361 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-481559" does not appear in /home/jenkins/minikube-integration/21701-626179/kubeconfig
	I1006 14:55:39.827381  696361 kubeconfig.go:62] /home/jenkins/minikube-integration/21701-626179/kubeconfig needs updating (will repair): [kubeconfig missing "ha-481559" cluster setting kubeconfig missing "ha-481559" context setting]
	I1006 14:55:39.827726  696361 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-626179/kubeconfig: {Name:mke84a74c9d22714f21826744ac414fa621492d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:55:39.828320  696361 kapi.go:59] client config for ha-481559: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/client.crt", KeyFile:"/home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/client.key", CAFile:"/home/jenkins/minikube-integration/21701-626179/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1006 14:55:39.828780  696361 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1006 14:55:39.828793  696361 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1006 14:55:39.828799  696361 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1006 14:55:39.828802  696361 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1006 14:55:39.828805  696361 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1006 14:55:39.828865  696361 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1006 14:55:39.829225  696361 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1006 14:55:39.836565  696361 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1006 14:55:39.836593  696361 kubeadm.go:601] duration metric: took 16.98578ms to restartPrimaryControlPlane
	I1006 14:55:39.836602  696361 kubeadm.go:402] duration metric: took 53.297464ms to StartCluster
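The decision at 14:55:39.836 ("The running cluster does not require reconfiguration") comes from the diff -u of kubeadm.yaml against kubeadm.yaml.new two lines earlier: diff exits 0 for identical files and 1 when they differ. A sketch of branching on that convention (helper name invented):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// needsReconfigure runs diff -u old new and maps its exit status:
// 0 -> identical (no reconfigure), 1 -> differs, anything else -> error.
func needsReconfigure(oldPath, newPath string) (bool, error) {
	err := exec.Command("diff", "-u", oldPath, newPath).Run()
	if err == nil {
		return false, nil
	}
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 1 {
		return true, nil
	}
	return false, err
}

func main() {
	fmt.Println(needsReconfigure("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new"))
}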
	I1006 14:55:39.836618  696361 settings.go:142] acquiring lock: {Name:mk49b10f71f24d1f54d5c453b3b04e717e9a9100 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:55:39.836679  696361 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21701-626179/kubeconfig
	I1006 14:55:39.837293  696361 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-626179/kubeconfig: {Name:mke84a74c9d22714f21826744ac414fa621492d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:55:39.837551  696361 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1006 14:55:39.837640  696361 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1006 14:55:39.837721  696361 addons.go:69] Setting storage-provisioner=true in profile "ha-481559"
	I1006 14:55:39.837737  696361 addons.go:238] Setting addon storage-provisioner=true in "ha-481559"
	I1006 14:55:39.837742  696361 config.go:182] Loaded profile config "ha-481559": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 14:55:39.837756  696361 addons.go:69] Setting default-storageclass=true in profile "ha-481559"
	I1006 14:55:39.837792  696361 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-481559"
	I1006 14:55:39.837774  696361 host.go:66] Checking if "ha-481559" exists ...
	I1006 14:55:39.838098  696361 cli_runner.go:164] Run: docker container inspect ha-481559 --format={{.State.Status}}
	I1006 14:55:39.838222  696361 cli_runner.go:164] Run: docker container inspect ha-481559 --format={{.State.Status}}
	I1006 14:55:39.840891  696361 out.go:179] * Verifying Kubernetes components...
	I1006 14:55:39.841917  696361 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 14:55:39.856657  696361 kapi.go:59] client config for ha-481559: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/client.crt", KeyFile:"/home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/client.key", CAFile:"/home/jenkins/minikube-integration/21701-626179/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1006 14:55:39.857025  696361 addons.go:238] Setting addon default-storageclass=true in "ha-481559"
	I1006 14:55:39.857071  696361 host.go:66] Checking if "ha-481559" exists ...
	I1006 14:55:39.857581  696361 cli_runner.go:164] Run: docker container inspect ha-481559 --format={{.State.Status}}
	I1006 14:55:39.858971  696361 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1006 14:55:39.860226  696361 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 14:55:39.860245  696361 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1006 14:55:39.860299  696361 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:55:39.882582  696361 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1006 14:55:39.882610  696361 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1006 14:55:39.882675  696361 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:55:39.884044  696361 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa Username:docker}
	I1006 14:55:39.900943  696361 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa Username:docker}
	I1006 14:55:39.945526  696361 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1006 14:55:39.958317  696361 node_ready.go:35] waiting up to 6m0s for node "ha-481559" to be "Ready" ...
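node_ready.go now polls the node object for up to 6m0s. minikube does this through client-go; the shape of the wait is a generic poll-until-deadline, sketched here with the stdlib only (helper and stand-in condition invented):

package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

// pollUntil calls check every interval until it returns true or ctx expires,
// the same shape as the node_ready wait above.
func pollUntil(ctx context.Context, interval time.Duration, check func() (bool, error)) error {
	tick := time.NewTicker(interval)
	defer tick.Stop()
	for {
		ok, err := check()
		if err != nil {
			return err
		}
		if ok {
			return nil
		}
		select {
		case <-ctx.Done():
			return errors.New("timed out waiting for condition")
		case <-tick.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	n := 0
	err := pollUntil(ctx, 2*time.Second, func() (bool, error) {
		n++
		return n >= 3, nil // stand-in for "node is Ready"
	})
	fmt.Println(err)
}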
	I1006 14:55:39.992150  696361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 14:55:40.007286  696361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	W1006 14:55:40.047812  696361 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:55:40.047859  696361 retry.go:31] will retry after 364.057024ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:55:40.064751  696361 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:55:40.064789  696361 retry.go:31] will retry after 327.571737ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
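The retry.go lines above implement apply-with-retry: each failed kubectl apply schedules another attempt after a short randomized delay (364ms, 327ms, ...) while the apiserver on localhost:8443 is still coming up. A sketch of the same loop, assuming plain exec of kubectl is acceptable (attempt count and delay range invented):

package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

// applyWithRetry reruns kubectl apply until it succeeds or attempts are
// exhausted, sleeping a short randomized interval between tries, like the
// retry.go lines above.
func applyWithRetry(manifest string, attempts int) error {
	var err error
	for i := 0; i < attempts; i++ {
		err = exec.Command("kubectl", "apply", "--force", "-f", manifest).Run()
		if err == nil {
			return nil
		}
		d := time.Duration(250+rand.Intn(250)) * time.Millisecond
		fmt.Printf("apply failed, will retry after %v: %v\n", d, err)
		time.Sleep(d)
	}
	return err
}

func main() {
	fmt.Println(applyWithRetry("/etc/kubernetes/addons/storageclass.yaml", 5))
}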
	I1006 14:55:40.393452  696361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1006 14:55:40.413056  696361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1006 14:55:40.448723  696361 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:55:40.448759  696361 retry.go:31] will retry after 403.141628ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:55:40.468798  696361 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:55:40.468834  696361 retry.go:31] will retry after 276.4293ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:55:40.746367  696361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1006 14:55:40.802524  696361 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:55:40.802559  696361 retry.go:31] will retry after 311.376172ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:55:40.852754  696361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1006 14:55:40.906981  696361 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:55:40.907021  696361 retry.go:31] will retry after 474.24301ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:55:41.114995  696361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1006 14:55:41.170001  696361 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:55:41.170049  696361 retry.go:31] will retry after 897.092965ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:55:41.382425  696361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1006 14:55:41.437366  696361 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:55:41.437397  696361 retry.go:31] will retry after 965.167019ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:55:41.958939  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	I1006 14:55:42.068134  696361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1006 14:55:42.122887  696361 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:55:42.122925  696361 retry.go:31] will retry after 947.959168ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:55:42.403332  696361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1006 14:55:42.457238  696361 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:55:42.457275  696361 retry.go:31] will retry after 1.650071235s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:55:43.071967  696361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1006 14:55:43.125956  696361 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:55:43.125994  696361 retry.go:31] will retry after 2.176788338s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:55:44.108266  696361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1006 14:55:44.161384  696361 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:55:44.161417  696361 retry.go:31] will retry after 2.544730451s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:55:44.459252  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	I1006 14:55:45.304030  696361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1006 14:55:45.359630  696361 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:55:45.359670  696361 retry.go:31] will retry after 2.25019711s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:55:46.459340  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	I1006 14:55:46.706682  696361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1006 14:55:46.759581  696361 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:55:46.759615  696361 retry.go:31] will retry after 2.522056071s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:55:47.610733  696361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1006 14:55:47.664269  696361 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:55:47.664306  696361 retry.go:31] will retry after 4.640766085s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:55:48.959157  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	I1006 14:55:49.282628  696361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1006 14:55:49.336384  696361 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:55:49.336418  696361 retry.go:31] will retry after 5.673676228s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:55:51.459087  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	I1006 14:55:52.305382  696361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1006 14:55:52.359321  696361 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:55:52.359361  696361 retry.go:31] will retry after 9.481577286s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:55:53.959083  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	I1006 14:55:55.010721  696361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1006 14:55:55.065131  696361 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:55:55.065171  696361 retry.go:31] will retry after 3.836963062s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:55:56.459045  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	I1006 14:55:58.902488  696361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1006 14:55:58.955901  696361 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:55:58.955935  696361 retry.go:31] will retry after 5.927536984s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:55:58.959353  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:56:01.459047  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	I1006 14:56:01.841474  696361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1006 14:56:01.898521  696361 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:56:01.898557  696361 retry.go:31] will retry after 4.904827501s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:56:03.958922  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	I1006 14:56:04.884501  696361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1006 14:56:04.939279  696361 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:56:04.939317  696361 retry.go:31] will retry after 7.40875545s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:56:05.959924  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	I1006 14:56:06.804327  696361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1006 14:56:06.857900  696361 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:56:06.857929  696361 retry.go:31] will retry after 19.104468711s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:56:08.458883  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:56:10.459161  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	I1006 14:56:12.348374  696361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1006 14:56:12.402365  696361 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:56:12.402403  696361 retry.go:31] will retry after 18.378132313s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:56:12.959096  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:56:15.458930  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:56:17.459924  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:56:19.959250  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:56:22.459052  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:56:24.958967  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	I1006 14:56:25.962990  696361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1006 14:56:26.017478  696361 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:56:26.017517  696361 retry.go:31] will retry after 29.077614598s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:56:26.959419  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:56:28.959649  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	I1006 14:56:30.781291  696361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1006 14:56:30.836228  696361 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:56:30.836263  696361 retry.go:31] will retry after 39.344728119s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:56:30.959929  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:56:33.459024  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:56:35.959931  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:56:38.459088  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:56:40.959025  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:56:43.459871  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:56:45.959871  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:56:48.460091  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:56:50.959024  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:56:52.959070  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	I1006 14:56:55.096159  696361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1006 14:56:55.150330  696361 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:56:55.150378  696361 retry.go:31] will retry after 28.420260342s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:56:55.459257  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:56:57.959372  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:56:59.959551  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:57:01.959698  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:57:04.459881  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:57:06.959832  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:57:08.959887  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	I1006 14:57:10.181504  696361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1006 14:57:10.237344  696361 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:57:10.237488  696361 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1006 14:57:11.459330  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:57:13.959081  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:57:15.959422  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:57:18.459013  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:57:20.459598  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:57:22.959317  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	I1006 14:57:23.571775  696361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1006 14:57:23.627058  696361 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:57:23.627201  696361 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1006 14:57:23.629381  696361 out.go:179] * Enabled addons: 
	I1006 14:57:23.630438  696361 addons.go:514] duration metric: took 1m43.792792491s for enable addons: enabled=[]
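The growing "will retry after …" intervals in the log above (roughly 300ms, doubling with jitter toward ~30s) are characteristic of a jittered exponential-backoff loop wrapped around each kubectl apply. The following is a minimal Go sketch of that pattern only; the helper names (applyManifest, retryWithBackoff) are hypothetical and do not reflect minikube's actual retry package, which the log cites as retry.go:

	// Hypothetical sketch of the retry-with-backoff pattern seen in the log above.
	// applyManifest and retryWithBackoff are illustrative names, not minikube's API.
	package main

	import (
		"fmt"
		"math/rand"
		"os/exec"
		"time"
	)

	// applyManifest shells out to kubectl, mirroring the ssh_runner calls above.
	func applyManifest(path string) error {
		out, err := exec.Command("kubectl", "apply", "--force", "-f", path).CombinedOutput()
		if err != nil {
			return fmt.Errorf("kubectl apply -f %s: %v\n%s", path, err, out)
		}
		return nil
	}

	// retryWithBackoff retries fn with a jittered, roughly doubling delay,
	// which is why the logged "will retry after" intervals grow over time.
	func retryWithBackoff(fn func() error, maxElapsed time.Duration) error {
		delay := 300 * time.Millisecond
		deadline := time.Now().Add(maxElapsed)
		for {
			err := fn()
			if err == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("giving up: %w", err)
			}
			// Jitter the delay so concurrent appliers (storageclass and
			// storage-provisioner above) do not retry in lockstep.
			sleep := delay + time.Duration(rand.Int63n(int64(delay)))
			fmt.Printf("will retry after %v: %v\n", sleep, err)
			time.Sleep(sleep)
			if delay < 30*time.Second {
				delay *= 2
			}
		}
	}

	func main() {
		err := retryWithBackoff(func() error {
			return applyManifest("/etc/kubernetes/addons/storageclass.yaml")
		}, 2*time.Minute)
		if err != nil {
			fmt.Println("Enabling 'default-storageclass' returned an error:", err)
		}
	}

Note that every attempt here fails the same way ("dial tcp [::1]:8443: connect: connection refused"): the backoff never helps because kube-apiserver on this control plane never comes up, so after the elapsed budget both addons are reported as failed and the enabled set is empty.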
	W1006 14:57:25.459171  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:57:27.959455  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:57:30.459027  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:57:32.459536  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:57:34.959010  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:57:36.959307  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:57:39.458947  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:57:41.459099  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:57:43.459520  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:57:45.958920  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:57:47.959333  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:57:49.959783  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:57:52.459003  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:57:54.459607  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:57:56.958916  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:57:58.959167  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:58:00.959819  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:58:03.459304  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:58:05.459888  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:58:07.959107  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:58:09.959766  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:58:12.459369  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:58:14.959038  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:58:17.459044  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:58:19.459387  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:58:21.958996  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:58:23.959805  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:58:26.459119  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:58:28.958951  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:58:30.959277  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:58:33.458997  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:58:35.459658  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:58:37.959015  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	[... same warning repeated 78 more times, every 2–2.5s, from 14:58:39 through 15:01:37 (elided) ...]
	W1006 15:01:39.459459  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	I1006 15:01:39.959410  696361 node_ready.go:38] duration metric: took 6m0.001052975s for node "ha-481559" to be "Ready" ...
	I1006 15:01:39.961897  696361 out.go:203] 
	W1006 15:01:39.963068  696361 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1006 15:01:39.963087  696361 out.go:285] * 
	W1006 15:01:39.964873  696361 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1006 15:01:39.966045  696361 out.go:203] 
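
The wait loop above polls the node's "Ready" condition roughly every 2.5 seconds until a 6-minute deadline, and only then exits with GUEST_START. A minimal Go sketch of that pattern, assuming a client-go Clientset; the package and function names and the fixed interval are illustrative, not minikube's actual node_ready.go:

    package nodewait

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitNodeReady polls until the named node reports Ready, treating
    // transient errors (like the connection-refused retries above) as
    // non-fatal so polling continues until the deadline expires.
    func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
        return wait.PollUntilContextTimeout(ctx, 2500*time.Millisecond, 6*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
                if err != nil {
                    fmt.Printf("error getting node %q (will retry): %v\n", name, err)
                    return false, nil
                }
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady {
                        return c.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
    }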
	
	
	==> CRI-O <==
	Oct 06 15:01:36 ha-481559 crio[517]: time="2025-10-06T15:01:36.082338227Z" level=info msg="createCtr: removing container 4b0059d47974523db945260c7699bd64dcbac10416cbee33980dd26f72a0e19c" id=d64e5b5a-11a9-4195-8099-7043018fdf73 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 15:01:36 ha-481559 crio[517]: time="2025-10-06T15:01:36.08238042Z" level=info msg="createCtr: deleting container 4b0059d47974523db945260c7699bd64dcbac10416cbee33980dd26f72a0e19c from storage" id=d64e5b5a-11a9-4195-8099-7043018fdf73 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 15:01:36 ha-481559 crio[517]: time="2025-10-06T15:01:36.084461881Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-ha-481559_kube-system_520c6060936b1c2aac479c99ed6c0355_0" id=d64e5b5a-11a9-4195-8099-7043018fdf73 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 15:01:37 ha-481559 crio[517]: time="2025-10-06T15:01:37.055937244Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=3c466103-7198-402a-bd3d-953674f2632d name=/runtime.v1.ImageService/ImageStatus
	Oct 06 15:01:37 ha-481559 crio[517]: time="2025-10-06T15:01:37.056852651Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=f0d9a52b-3192-43f3-bf76-09661ec8881f name=/runtime.v1.ImageService/ImageStatus
	Oct 06 15:01:37 ha-481559 crio[517]: time="2025-10-06T15:01:37.057807142Z" level=info msg="Creating container: kube-system/kube-apiserver-ha-481559/kube-apiserver" id=668fb572-f5be-4b5c-abb2-7b17c5471825 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 15:01:37 ha-481559 crio[517]: time="2025-10-06T15:01:37.058066903Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 15:01:37 ha-481559 crio[517]: time="2025-10-06T15:01:37.06256213Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 15:01:37 ha-481559 crio[517]: time="2025-10-06T15:01:37.062985363Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 15:01:37 ha-481559 crio[517]: time="2025-10-06T15:01:37.075143902Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=668fb572-f5be-4b5c-abb2-7b17c5471825 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 15:01:37 ha-481559 crio[517]: time="2025-10-06T15:01:37.076547924Z" level=info msg="createCtr: deleting container ID e0ac293b8abce3efed59ab591b6602d699b7d3400daa4a8fb2e55c69386948c1 from idIndex" id=668fb572-f5be-4b5c-abb2-7b17c5471825 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 15:01:37 ha-481559 crio[517]: time="2025-10-06T15:01:37.076587388Z" level=info msg="createCtr: removing container e0ac293b8abce3efed59ab591b6602d699b7d3400daa4a8fb2e55c69386948c1" id=668fb572-f5be-4b5c-abb2-7b17c5471825 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 15:01:37 ha-481559 crio[517]: time="2025-10-06T15:01:37.076627777Z" level=info msg="createCtr: deleting container e0ac293b8abce3efed59ab591b6602d699b7d3400daa4a8fb2e55c69386948c1 from storage" id=668fb572-f5be-4b5c-abb2-7b17c5471825 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 15:01:37 ha-481559 crio[517]: time="2025-10-06T15:01:37.078851699Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-ha-481559_kube-system_b4e1cca8a09d3789a7e0862428dfe0db_0" id=668fb572-f5be-4b5c-abb2-7b17c5471825 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 15:01:39 ha-481559 crio[517]: time="2025-10-06T15:01:39.055701382Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=f4a276b2-0e7f-4319-bcd6-09688ac1a5f7 name=/runtime.v1.ImageService/ImageStatus
	Oct 06 15:01:39 ha-481559 crio[517]: time="2025-10-06T15:01:39.056472784Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=4079efbd-8599-47d5-9e39-74a586381eb6 name=/runtime.v1.ImageService/ImageStatus
	Oct 06 15:01:39 ha-481559 crio[517]: time="2025-10-06T15:01:39.057301747Z" level=info msg="Creating container: kube-system/kube-scheduler-ha-481559/kube-scheduler" id=70ab66b7-406b-42f3-b058-05db47a1dcaa name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 15:01:39 ha-481559 crio[517]: time="2025-10-06T15:01:39.057531285Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 15:01:39 ha-481559 crio[517]: time="2025-10-06T15:01:39.060907945Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 15:01:39 ha-481559 crio[517]: time="2025-10-06T15:01:39.061374158Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 15:01:39 ha-481559 crio[517]: time="2025-10-06T15:01:39.076972344Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=70ab66b7-406b-42f3-b058-05db47a1dcaa name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 15:01:39 ha-481559 crio[517]: time="2025-10-06T15:01:39.078291327Z" level=info msg="createCtr: deleting container ID b963866bf433e8809f16886d1bda881f3ca7fa54ed1407283fefec553169d089 from idIndex" id=70ab66b7-406b-42f3-b058-05db47a1dcaa name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 15:01:39 ha-481559 crio[517]: time="2025-10-06T15:01:39.078328192Z" level=info msg="createCtr: removing container b963866bf433e8809f16886d1bda881f3ca7fa54ed1407283fefec553169d089" id=70ab66b7-406b-42f3-b058-05db47a1dcaa name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 15:01:39 ha-481559 crio[517]: time="2025-10-06T15:01:39.078383536Z" level=info msg="createCtr: deleting container b963866bf433e8809f16886d1bda881f3ca7fa54ed1407283fefec553169d089 from storage" id=70ab66b7-406b-42f3-b058-05db47a1dcaa name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 15:01:39 ha-481559 crio[517]: time="2025-10-06T15:01:39.080448731Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-ha-481559_kube-system_cc93cb8d89afaa943672c70952b45174_0" id=70ab66b7-406b-42f3-b058-05db47a1dcaa name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 15:01:40.956044    2019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 15:01:40.956675    2019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 15:01:40.958152    2019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 15:01:40.958618    2019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 15:01:40.959951    2019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	
	
	==> kernel <==
	 15:01:40 up  5:43,  0 user,  load average: 0.01, 0.10, 0.15
	Linux ha-481559 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 06 15:01:36 ha-481559 kubelet[669]: E1006 15:01:36.055954     669 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-481559\" not found" node="ha-481559"
	Oct 06 15:01:36 ha-481559 kubelet[669]: E1006 15:01:36.084772     669 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 06 15:01:36 ha-481559 kubelet[669]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 06 15:01:36 ha-481559 kubelet[669]:  > podSandboxID="54e59bb74d6ee190f4df1b8cc3ff75360e4a5a7127945ed719ef9cf185de6a07"
	Oct 06 15:01:36 ha-481559 kubelet[669]: E1006 15:01:36.084885     669 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 06 15:01:36 ha-481559 kubelet[669]:         container etcd start failed in pod etcd-ha-481559_kube-system(520c6060936b1c2aac479c99ed6c0355): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 06 15:01:36 ha-481559 kubelet[669]:  > logger="UnhandledError"
	Oct 06 15:01:36 ha-481559 kubelet[669]: E1006 15:01:36.084930     669 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-ha-481559" podUID="520c6060936b1c2aac479c99ed6c0355"
	Oct 06 15:01:37 ha-481559 kubelet[669]: E1006 15:01:37.055476     669 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-481559\" not found" node="ha-481559"
	Oct 06 15:01:37 ha-481559 kubelet[669]: E1006 15:01:37.079140     669 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 06 15:01:37 ha-481559 kubelet[669]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 06 15:01:37 ha-481559 kubelet[669]:  > podSandboxID="68f64753e0c9bc4241bd77357a937ef42a17b59c9ee0eba280403f1335b5cc1e"
	Oct 06 15:01:37 ha-481559 kubelet[669]: E1006 15:01:37.079287     669 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 06 15:01:37 ha-481559 kubelet[669]:         container kube-apiserver start failed in pod kube-apiserver-ha-481559_kube-system(b4e1cca8a09d3789a7e0862428dfe0db): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 06 15:01:37 ha-481559 kubelet[669]:  > logger="UnhandledError"
	Oct 06 15:01:37 ha-481559 kubelet[669]: E1006 15:01:37.079323     669 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-ha-481559" podUID="b4e1cca8a09d3789a7e0862428dfe0db"
	Oct 06 15:01:39 ha-481559 kubelet[669]: E1006 15:01:39.055306     669 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-481559\" not found" node="ha-481559"
	Oct 06 15:01:39 ha-481559 kubelet[669]: E1006 15:01:39.070227     669 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-481559\" not found"
	Oct 06 15:01:39 ha-481559 kubelet[669]: E1006 15:01:39.080716     669 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 06 15:01:39 ha-481559 kubelet[669]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 06 15:01:39 ha-481559 kubelet[669]:  > podSandboxID="3b7ddccf443b7c3df7fe7a1aafb38b39a777788c5f30f29647a93377ee88f8e0"
	Oct 06 15:01:39 ha-481559 kubelet[669]: E1006 15:01:39.080803     669 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 06 15:01:39 ha-481559 kubelet[669]:         container kube-scheduler start failed in pod kube-scheduler-ha-481559_kube-system(cc93cb8d89afaa943672c70952b45174): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 06 15:01:39 ha-481559 kubelet[669]:  > logger="UnhandledError"
	Oct 06 15:01:39 ha-481559 kubelet[669]: E1006 15:01:39.080836     669 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-ha-481559" podUID="cc93cb8d89afaa943672c70952b45174"
	

-- /stdout --
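
Every static pod in the logs above (etcd, kube-apiserver, kube-scheduler) dies at the same point: "cannot open sd-bus: No such file or directory", which suggests the OCI runtime is trying to reach systemd over D-Bus (as it must when using the systemd cgroup driver) and no bus is reachable inside the kicbase container, so no control-plane container is ever created and the apiserver stays unreachable. A small diagnostic sketch in Go that probes the conventional systemd bus sockets; the two paths are the usual defaults and an assumption here, not taken from this report:

    package main

    import (
        "fmt"
        "net"
        "os"
    )

    // Probe the sockets an OCI runtime built for the systemd cgroup
    // driver needs; "cannot open sd-bus" usually means they are absent.
    func main() {
        for _, p := range []string{"/run/systemd/private", "/run/dbus/system_bus_socket"} {
            if _, err := os.Stat(p); err != nil {
                fmt.Printf("%s: missing (%v)\n", p, err)
                continue
            }
            conn, err := net.Dial("unix", p) // the private socket needs root
            if err != nil {
                fmt.Printf("%s: present but not connectable (%v)\n", p, err)
                continue
            }
            conn.Close()
            fmt.Printf("%s: reachable\n", p)
        }
    }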
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-481559 -n ha-481559
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-481559 -n ha-481559: exit status 2 (298.635658ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "ha-481559" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (370.02s)

x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (1.82s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-481559 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-481559 node delete m03 --alsologtostderr -v 5: exit status 103 (250.305256ms)

-- stdout --
	* The control-plane node ha-481559 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p ha-481559"

-- /stdout --
** stderr ** 
	I1006 15:01:41.392190  700453 out.go:360] Setting OutFile to fd 1 ...
	I1006 15:01:41.392444  700453 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 15:01:41.392452  700453 out.go:374] Setting ErrFile to fd 2...
	I1006 15:01:41.392456  700453 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 15:01:41.392685  700453 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-626179/.minikube/bin
	I1006 15:01:41.392982  700453 mustload.go:65] Loading cluster: ha-481559
	I1006 15:01:41.393314  700453 config.go:182] Loaded profile config "ha-481559": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 15:01:41.393693  700453 cli_runner.go:164] Run: docker container inspect ha-481559 --format={{.State.Status}}
	I1006 15:01:41.411279  700453 host.go:66] Checking if "ha-481559" exists ...
	I1006 15:01:41.411544  700453 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 15:01:41.466825  700453 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-06 15:01:41.456417386 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1006 15:01:41.467115  700453 api_server.go:166] Checking apiserver status ...
	I1006 15:01:41.467197  700453 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 15:01:41.467287  700453 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 15:01:41.485071  700453 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa Username:docker}
	W1006 15:01:41.589683  700453 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1006 15:01:41.592061  700453 out.go:179] * The control-plane node ha-481559 apiserver is not running: (state=Stopped)
	I1006 15:01:41.593153  700453 out.go:179]   To start a cluster, run: "minikube start -p ha-481559"

** /stderr **
ha_test.go:491: node delete returned an error. args "out/minikube-linux-amd64 -p ha-481559 node delete m03 --alsologtostderr -v 5": exit status 103
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-481559 status --alsologtostderr -v 5
ha_test.go:495: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-481559 status --alsologtostderr -v 5: exit status 2 (297.874199ms)

-- stdout --
	ha-481559
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Configured
	

-- /stdout --
** stderr ** 
	I1006 15:01:41.643049  700548 out.go:360] Setting OutFile to fd 1 ...
	I1006 15:01:41.643375  700548 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 15:01:41.643386  700548 out.go:374] Setting ErrFile to fd 2...
	I1006 15:01:41.643391  700548 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 15:01:41.643626  700548 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-626179/.minikube/bin
	I1006 15:01:41.643818  700548 out.go:368] Setting JSON to false
	I1006 15:01:41.643848  700548 mustload.go:65] Loading cluster: ha-481559
	I1006 15:01:41.643975  700548 notify.go:220] Checking for updates...
	I1006 15:01:41.644248  700548 config.go:182] Loaded profile config "ha-481559": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 15:01:41.644267  700548 status.go:174] checking status of ha-481559 ...
	I1006 15:01:41.644691  700548 cli_runner.go:164] Run: docker container inspect ha-481559 --format={{.State.Status}}
	I1006 15:01:41.664505  700548 status.go:371] ha-481559 host status = "Running" (err=<nil>)
	I1006 15:01:41.664549  700548 host.go:66] Checking if "ha-481559" exists ...
	I1006 15:01:41.664892  700548 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-481559
	I1006 15:01:41.682370  700548 host.go:66] Checking if "ha-481559" exists ...
	I1006 15:01:41.682629  700548 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1006 15:01:41.682684  700548 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 15:01:41.699999  700548 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa Username:docker}
	I1006 15:01:41.800576  700548 ssh_runner.go:195] Run: systemctl --version
	I1006 15:01:41.807039  700548 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1006 15:01:41.819387  700548 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 15:01:41.880522  700548 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-06 15:01:41.87022982 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1006 15:01:41.881079  700548 kubeconfig.go:125] found "ha-481559" server: "https://192.168.49.2:8443"
	I1006 15:01:41.881118  700548 api_server.go:166] Checking apiserver status ...
	I1006 15:01:41.881168  700548 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1006 15:01:41.891768  700548 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1006 15:01:41.891796  700548 status.go:463] ha-481559 apiserver status = Running (err=<nil>)
	I1006 15:01:41.891811  700548 status.go:176] ha-481559 status: &{Name:ha-481559 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:497: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-481559 status --alsologtostderr -v 5" : exit status 2
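
Both commands reach the same verdict the same way: minikube greps for a kube-apiserver process over SSH (`sudo pgrep -xnf kube-apiserver.*minikube.*`), and exit status 1 means no such process, hence state=Stopped. An equivalent external check is to probe the apiserver's /healthz endpoint directly; a sketch that assumes the default anonymous access to /healthz and skips certificate verification purely for brevity:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout:   3 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        // 192.168.49.2:8443 is the apiserver endpoint from this report.
        resp, err := client.Get("https://192.168.49.2:8443/healthz")
        if err != nil {
            fmt.Println("apiserver unreachable:", err) // e.g. connection refused, as logged above
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("healthz: %d %s\n", resp.StatusCode, body)
    }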
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-481559
helpers_test.go:243: (dbg) docker inspect ha-481559:

-- stdout --
	[
	    {
	        "Id": "8b017d29b6b188c11217460aa328e959f3cceef4aaac68c0efbe9c3f356f27b0",
	        "Created": "2025-10-06T14:44:39.623616791Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 696563,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-06T14:55:32.848872757Z",
	            "FinishedAt": "2025-10-06T14:55:31.716309888Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/8b017d29b6b188c11217460aa328e959f3cceef4aaac68c0efbe9c3f356f27b0/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8b017d29b6b188c11217460aa328e959f3cceef4aaac68c0efbe9c3f356f27b0/hostname",
	        "HostsPath": "/var/lib/docker/containers/8b017d29b6b188c11217460aa328e959f3cceef4aaac68c0efbe9c3f356f27b0/hosts",
	        "LogPath": "/var/lib/docker/containers/8b017d29b6b188c11217460aa328e959f3cceef4aaac68c0efbe9c3f356f27b0/8b017d29b6b188c11217460aa328e959f3cceef4aaac68c0efbe9c3f356f27b0-json.log",
	        "Name": "/ha-481559",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-481559:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-481559",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "8b017d29b6b188c11217460aa328e959f3cceef4aaac68c0efbe9c3f356f27b0",
	                "LowerDir": "/var/lib/docker/overlay2/764c4d13032cea1c981a1513641378a4e309e5bc01b5356c6fecba1c3e0e2311-init/diff:/var/lib/docker/overlay2/498c39ad2e273bbda04a4b230222b9767ea2da097b1fe98436168d26143cd080/diff",
	                "MergedDir": "/var/lib/docker/overlay2/764c4d13032cea1c981a1513641378a4e309e5bc01b5356c6fecba1c3e0e2311/merged",
	                "UpperDir": "/var/lib/docker/overlay2/764c4d13032cea1c981a1513641378a4e309e5bc01b5356c6fecba1c3e0e2311/diff",
	                "WorkDir": "/var/lib/docker/overlay2/764c4d13032cea1c981a1513641378a4e309e5bc01b5356c6fecba1c3e0e2311/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "ha-481559",
	                "Source": "/var/lib/docker/volumes/ha-481559/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-481559",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-481559",
	                "name.minikube.sigs.k8s.io": "ha-481559",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a18d7522c85960ccdcf70fe347e0c10a64182561d1f729321bfbf2cdfd2482d4",
	            "SandboxKey": "/var/run/docker/netns/a18d7522c859",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32888"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32889"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32892"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32890"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32891"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-481559": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "d2:f7:17:fa:b2:38",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "be549c6a1ae4457d4629d9a7f86cde88021333ee0af8bb7a740b008115c43dde",
	                    "EndpointID": "a1d09ec0db4820720a30f43507e6c86000afb21b7ea62df9051d26d4095c5091",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-481559",
	                        "8b017d29b6b1"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
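
The binding that matters for the status checks above is "8443/tcp" published at 127.0.0.1:32891, the host-side apiserver port. A short sketch that pulls such a mapping out of docker inspect output; the struct fields mirror the JSON above, and the hard-coded profile name is for illustration only:

    package main

    import (
        "encoding/json"
        "fmt"
        "log"
        "os/exec"
    )

    // container matches only the fields of `docker inspect` needed here.
    type container struct {
        NetworkSettings struct {
            Ports map[string][]struct {
                HostIp   string
                HostPort string
            }
        }
    }

    func main() {
        out, err := exec.Command("docker", "inspect", "ha-481559").Output()
        if err != nil {
            log.Fatal(err)
        }
        var cs []container // docker inspect returns a JSON array
        if err := json.Unmarshal(out, &cs); err != nil {
            log.Fatal(err)
        }
        if len(cs) == 0 {
            log.Fatal("no such container")
        }
        for _, b := range cs[0].NetworkSettings.Ports["8443/tcp"] {
            fmt.Printf("apiserver published at %s:%s\n", b.HostIp, b.HostPort)
        }
    }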
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-481559 -n ha-481559
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-481559 -n ha-481559: exit status 2 (296.727712ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestMultiControlPlane/serial/DeleteSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-481559 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/DeleteSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                    ARGS                                     │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ kubectl │ ha-481559 kubectl -- rollout status deployment/busybox                      │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 14:52 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 14:52 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 14:52 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 14:52 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 14:53 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 14:53 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 14:53 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 14:53 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 14:53 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 14:53 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 14:54 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 14:54 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'       │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 14:54 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- exec  -- nslookup kubernetes.io                        │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 14:54 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- exec  -- nslookup kubernetes.default                   │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 14:54 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 14:54 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'       │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 14:54 UTC │                     │
	│ node    │ ha-481559 node add --alsologtostderr -v 5                                   │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 14:54 UTC │                     │
	│ node    │ ha-481559 node stop m02 --alsologtostderr -v 5                              │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 14:54 UTC │                     │
	│ node    │ ha-481559 node start m02 --alsologtostderr -v 5                             │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 14:54 UTC │                     │
	│ node    │ ha-481559 node list --alsologtostderr -v 5                                  │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 14:55 UTC │                     │
	│ stop    │ ha-481559 stop --alsologtostderr -v 5                                       │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 14:55 UTC │ 06 Oct 25 14:55 UTC │
	│ start   │ ha-481559 start --wait true --alsologtostderr -v 5                          │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 14:55 UTC │                     │
	│ node    │ ha-481559 node list --alsologtostderr -v 5                                  │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 15:01 UTC │                     │
	│ node    │ ha-481559 node delete m03 --alsologtostderr -v 5                            │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 15:01 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
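	
	For reference, the tail of the audit trail can be replayed by hand. A minimal reproduction of the stop/restart/delete sequence this post-mortem covers, with the profile name and flags taken verbatim from the table above (the -p form is equivalent to the logged invocations):
	
	    # Replay of the last three lifecycle steps for profile ha-481559
	    out/minikube-linux-amd64 stop -p ha-481559 --alsologtostderr -v 5
	    out/minikube-linux-amd64 start -p ha-481559 --wait true --alsologtostderr -v 5
	    out/minikube-linux-amd64 node delete m03 -p ha-481559 --alsologtostderr -v 5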
	
	
	==> Last Start <==
	Log file created at: 2025/10/06 14:55:32
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1006 14:55:32.625450  696361 out.go:360] Setting OutFile to fd 1 ...
	I1006 14:55:32.625699  696361 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 14:55:32.625708  696361 out.go:374] Setting ErrFile to fd 2...
	I1006 14:55:32.625712  696361 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 14:55:32.625887  696361 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-626179/.minikube/bin
	I1006 14:55:32.626365  696361 out.go:368] Setting JSON to false
	I1006 14:55:32.627324  696361 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":20269,"bootTime":1759742264,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1006 14:55:32.627441  696361 start.go:140] virtualization: kvm guest
	I1006 14:55:32.629359  696361 out.go:179] * [ha-481559] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1006 14:55:32.630682  696361 out.go:179]   - MINIKUBE_LOCATION=21701
	I1006 14:55:32.630681  696361 notify.go:220] Checking for updates...
	I1006 14:55:32.632684  696361 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1006 14:55:32.633920  696361 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21701-626179/kubeconfig
	I1006 14:55:32.635038  696361 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21701-626179/.minikube
	I1006 14:55:32.635990  696361 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1006 14:55:32.636965  696361 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1006 14:55:32.638369  696361 config.go:182] Loaded profile config "ha-481559": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 14:55:32.638498  696361 driver.go:421] Setting default libvirt URI to qemu:///system
	I1006 14:55:32.662312  696361 docker.go:123] docker version: linux-28.5.0:Docker Engine - Community
	I1006 14:55:32.662403  696361 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 14:55:32.719438  696361 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-06 14:55:32.709294788 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1006 14:55:32.719550  696361 docker.go:318] overlay module found
	I1006 14:55:32.721174  696361 out.go:179] * Using the docker driver based on existing profile
	I1006 14:55:32.722228  696361 start.go:304] selected driver: docker
	I1006 14:55:32.722242  696361 start.go:924] validating driver "docker" against &{Name:ha-481559 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-481559 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 14:55:32.722316  696361 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1006 14:55:32.722398  696361 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 14:55:32.778099  696361 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-06 14:55:32.768235461 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1006 14:55:32.778829  696361 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1006 14:55:32.778865  696361 cni.go:84] Creating CNI manager for ""
	I1006 14:55:32.778913  696361 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1006 14:55:32.778963  696361 start.go:348] cluster config:
	{Name:ha-481559 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-481559 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 14:55:32.780704  696361 out.go:179] * Starting "ha-481559" primary control-plane node in "ha-481559" cluster
	I1006 14:55:32.781770  696361 cache.go:123] Beginning downloading kic base image for docker with crio
	I1006 14:55:32.782811  696361 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1006 14:55:32.783693  696361 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1006 14:55:32.783726  696361 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21701-626179/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1006 14:55:32.783724  696361 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1006 14:55:32.783743  696361 cache.go:58] Caching tarball of preloaded images
	I1006 14:55:32.783836  696361 preload.go:233] Found /home/jenkins/minikube-integration/21701-626179/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1006 14:55:32.783847  696361 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1006 14:55:32.783950  696361 profile.go:143] Saving config to /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/config.json ...
	I1006 14:55:32.804191  696361 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1006 14:55:32.804233  696361 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1006 14:55:32.804253  696361 cache.go:232] Successfully downloaded all kic artifacts
	I1006 14:55:32.804278  696361 start.go:360] acquireMachinesLock for ha-481559: {Name:mk240cd185ab39e9e4d3fa7c476aea5736cb5b11 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1006 14:55:32.804339  696361 start.go:364] duration metric: took 38.329µs to acquireMachinesLock for "ha-481559"
	I1006 14:55:32.804358  696361 start.go:96] Skipping create...Using existing machine configuration
	I1006 14:55:32.804363  696361 fix.go:54] fixHost starting: 
	I1006 14:55:32.804593  696361 cli_runner.go:164] Run: docker container inspect ha-481559 --format={{.State.Status}}
	I1006 14:55:32.821756  696361 fix.go:112] recreateIfNeeded on ha-481559: state=Stopped err=<nil>
	W1006 14:55:32.821781  696361 fix.go:138] unexpected machine state, will restart: <nil>
	I1006 14:55:32.823475  696361 out.go:252] * Restarting existing docker container for "ha-481559" ...
	I1006 14:55:32.823539  696361 cli_runner.go:164] Run: docker start ha-481559
	I1006 14:55:33.064065  696361 cli_runner.go:164] Run: docker container inspect ha-481559 --format={{.State.Status}}
	I1006 14:55:33.082711  696361 kic.go:430] container "ha-481559" state is running.
	I1006 14:55:33.083092  696361 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-481559
	I1006 14:55:33.102599  696361 profile.go:143] Saving config to /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/config.json ...
	I1006 14:55:33.102818  696361 machine.go:93] provisionDockerMachine start ...
	I1006 14:55:33.102885  696361 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:55:33.121902  696361 main.go:141] libmachine: Using SSH client type: native
	I1006 14:55:33.122245  696361 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32888 <nil> <nil>}
	I1006 14:55:33.122265  696361 main.go:141] libmachine: About to run SSH command:
	hostname
	I1006 14:55:33.122961  696361 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:35020->127.0.0.1:32888: read: connection reset by peer
	I1006 14:55:36.268055  696361 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-481559
	
	I1006 14:55:36.268107  696361 ubuntu.go:182] provisioning hostname "ha-481559"
	I1006 14:55:36.268177  696361 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:55:36.286749  696361 main.go:141] libmachine: Using SSH client type: native
	I1006 14:55:36.287029  696361 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32888 <nil> <nil>}
	I1006 14:55:36.287044  696361 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-481559 && echo "ha-481559" | sudo tee /etc/hostname
	I1006 14:55:36.438131  696361 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-481559
	
	I1006 14:55:36.438276  696361 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:55:36.455780  696361 main.go:141] libmachine: Using SSH client type: native
	I1006 14:55:36.455989  696361 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32888 <nil> <nil>}
	I1006 14:55:36.456006  696361 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-481559' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-481559/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-481559' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1006 14:55:36.598528  696361 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1006 14:55:36.598558  696361 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21701-626179/.minikube CaCertPath:/home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21701-626179/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21701-626179/.minikube}
	I1006 14:55:36.598594  696361 ubuntu.go:190] setting up certificates
	I1006 14:55:36.598608  696361 provision.go:84] configureAuth start
	I1006 14:55:36.598671  696361 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-481559
	I1006 14:55:36.615965  696361 provision.go:143] copyHostCerts
	I1006 14:55:36.616004  696361 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21701-626179/.minikube/ca.pem
	I1006 14:55:36.616065  696361 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-626179/.minikube/ca.pem, removing ...
	I1006 14:55:36.616086  696361 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-626179/.minikube/ca.pem
	I1006 14:55:36.616175  696361 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21701-626179/.minikube/ca.pem (1082 bytes)
	I1006 14:55:36.616305  696361 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21701-626179/.minikube/cert.pem
	I1006 14:55:36.616337  696361 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-626179/.minikube/cert.pem, removing ...
	I1006 14:55:36.616347  696361 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-626179/.minikube/cert.pem
	I1006 14:55:36.616392  696361 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21701-626179/.minikube/cert.pem (1123 bytes)
	I1006 14:55:36.616465  696361 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21701-626179/.minikube/key.pem
	I1006 14:55:36.616495  696361 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-626179/.minikube/key.pem, removing ...
	I1006 14:55:36.616506  696361 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-626179/.minikube/key.pem
	I1006 14:55:36.616549  696361 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21701-626179/.minikube/key.pem (1679 bytes)
	I1006 14:55:36.616693  696361 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21701-626179/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca-key.pem org=jenkins.ha-481559 san=[127.0.0.1 192.168.49.2 ha-481559 localhost minikube]
	I1006 14:55:36.950020  696361 provision.go:177] copyRemoteCerts
	I1006 14:55:36.950096  696361 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1006 14:55:36.950140  696361 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:55:36.967901  696361 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa Username:docker}
	I1006 14:55:37.069642  696361 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1006 14:55:37.069695  696361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1006 14:55:37.087171  696361 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1006 14:55:37.087278  696361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1006 14:55:37.104388  696361 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1006 14:55:37.104471  696361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1006 14:55:37.121024  696361 provision.go:87] duration metric: took 522.404021ms to configureAuth
	I1006 14:55:37.121046  696361 ubuntu.go:206] setting minikube options for container-runtime
	I1006 14:55:37.121222  696361 config.go:182] Loaded profile config "ha-481559": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 14:55:37.121328  696361 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:55:37.139234  696361 main.go:141] libmachine: Using SSH client type: native
	I1006 14:55:37.139495  696361 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32888 <nil> <nil>}
	I1006 14:55:37.139522  696361 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1006 14:55:37.394808  696361 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1006 14:55:37.394835  696361 machine.go:96] duration metric: took 4.292002113s to provisionDockerMachine
	I1006 14:55:37.394849  696361 start.go:293] postStartSetup for "ha-481559" (driver="docker")
	I1006 14:55:37.394860  696361 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1006 14:55:37.394929  696361 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1006 14:55:37.394973  696361 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:55:37.413054  696361 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa Username:docker}
	I1006 14:55:37.514362  696361 ssh_runner.go:195] Run: cat /etc/os-release
	I1006 14:55:37.517813  696361 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1006 14:55:37.517836  696361 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1006 14:55:37.517847  696361 filesync.go:126] Scanning /home/jenkins/minikube-integration/21701-626179/.minikube/addons for local assets ...
	I1006 14:55:37.517906  696361 filesync.go:126] Scanning /home/jenkins/minikube-integration/21701-626179/.minikube/files for local assets ...
	I1006 14:55:37.518019  696361 filesync.go:149] local asset: /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem -> 6297192.pem in /etc/ssl/certs
	I1006 14:55:37.518030  696361 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem -> /etc/ssl/certs/6297192.pem
	I1006 14:55:37.518152  696361 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1006 14:55:37.525401  696361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem --> /etc/ssl/certs/6297192.pem (1708 bytes)
	I1006 14:55:37.541908  696361 start.go:296] duration metric: took 147.043607ms for postStartSetup
	I1006 14:55:37.541980  696361 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1006 14:55:37.542026  696361 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:55:37.559403  696361 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa Username:docker}
	I1006 14:55:37.657540  696361 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1006 14:55:37.662107  696361 fix.go:56] duration metric: took 4.857735821s for fixHost
	I1006 14:55:37.662133  696361 start.go:83] releasing machines lock for "ha-481559", held for 4.857782629s
	I1006 14:55:37.662199  696361 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-481559
	I1006 14:55:37.679712  696361 ssh_runner.go:195] Run: cat /version.json
	I1006 14:55:37.679736  696361 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1006 14:55:37.679759  696361 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:55:37.679787  696361 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:55:37.697300  696361 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa Username:docker}
	I1006 14:55:37.697564  696361 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa Username:docker}
	I1006 14:55:37.851243  696361 ssh_runner.go:195] Run: systemctl --version
	I1006 14:55:37.857782  696361 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1006 14:55:37.892065  696361 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1006 14:55:37.896595  696361 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1006 14:55:37.896653  696361 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1006 14:55:37.904304  696361 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1006 14:55:37.904326  696361 start.go:495] detecting cgroup driver to use...
	I1006 14:55:37.904354  696361 detect.go:190] detected "systemd" cgroup driver on host os
	I1006 14:55:37.904388  696361 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1006 14:55:37.918633  696361 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1006 14:55:37.929951  696361 docker.go:218] disabling cri-docker service (if available) ...
	I1006 14:55:37.930003  696361 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1006 14:55:37.943242  696361 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1006 14:55:37.954619  696361 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1006 14:55:38.026399  696361 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1006 14:55:38.105961  696361 docker.go:234] disabling docker service ...
	I1006 14:55:38.106042  696361 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1006 14:55:38.120803  696361 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1006 14:55:38.132404  696361 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1006 14:55:38.209222  696361 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1006 14:55:38.289009  696361 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1006 14:55:38.301313  696361 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1006 14:55:38.315068  696361 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1006 14:55:38.315130  696361 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:55:38.323823  696361 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1006 14:55:38.323882  696361 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:55:38.332351  696361 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:55:38.340690  696361 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:55:38.349706  696361 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1006 14:55:38.357352  696361 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:55:38.365990  696361 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:55:38.374123  696361 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:55:38.382364  696361 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1006 14:55:38.389293  696361 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1006 14:55:38.396102  696361 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 14:55:38.474259  696361 ssh_runner.go:195] Run: sudo systemctl restart crio
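	
	Taken together, the sed edits logged above rewrite /etc/crio/crio.conf.d/02-crio.conf so that cri-o uses the expected pause image and the systemd cgroup driver before the runtime restart. A condensed sketch of the core edits, with paths and values exactly as logged (run on the node, not the host):
	
	    # Pin the pause image and cgroup driver cri-o should use, then restart it.
	    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf
	    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf
	    sudo systemctl daemon-reload && sudo systemctl restart crio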
	I1006 14:55:38.579652  696361 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1006 14:55:38.579712  696361 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1006 14:55:38.583658  696361 start.go:563] Will wait 60s for crictl version
	I1006 14:55:38.583711  696361 ssh_runner.go:195] Run: which crictl
	I1006 14:55:38.587093  696361 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1006 14:55:38.611002  696361 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1006 14:55:38.611081  696361 ssh_runner.go:195] Run: crio --version
	I1006 14:55:38.639866  696361 ssh_runner.go:195] Run: crio --version
	I1006 14:55:38.670329  696361 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1006 14:55:38.671337  696361 cli_runner.go:164] Run: docker network inspect ha-481559 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1006 14:55:38.687899  696361 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1006 14:55:38.691971  696361 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
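	
	The one-liner above is an idempotent /etc/hosts update: it filters out any stale host.minikube.internal entry, appends the fresh mapping to a temp file, and copies that file over /etc/hosts in a single step (the same pattern recurs below for control-plane.minikube.internal). A standalone sketch of the pattern, lightly reformatted from the logged command:
	
	    # Rebuild /etc/hosts with exactly one host.minikube.internal entry.
	    { grep -v $'\thost.minikube.internal$' /etc/hosts; \
	      echo $'192.168.49.1\thost.minikube.internal'; } > /tmp/h.$$
	    sudo cp /tmp/h.$$ /etc/hosts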
	I1006 14:55:38.702038  696361 kubeadm.go:883] updating cluster {Name:ha-481559 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-481559 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1006 14:55:38.702130  696361 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1006 14:55:38.702176  696361 ssh_runner.go:195] Run: sudo crictl images --output json
	I1006 14:55:38.734706  696361 crio.go:514] all images are preloaded for cri-o runtime.
	I1006 14:55:38.734729  696361 crio.go:433] Images already preloaded, skipping extraction
	I1006 14:55:38.734788  696361 ssh_runner.go:195] Run: sudo crictl images --output json
	I1006 14:55:38.761257  696361 crio.go:514] all images are preloaded for cri-o runtime.
	I1006 14:55:38.761292  696361 cache_images.go:85] Images are preloaded, skipping loading
	I1006 14:55:38.761302  696361 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1006 14:55:38.761450  696361 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-481559 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-481559 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1006 14:55:38.761537  696361 ssh_runner.go:195] Run: crio config
	I1006 14:55:38.806722  696361 cni.go:84] Creating CNI manager for ""
	I1006 14:55:38.806741  696361 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1006 14:55:38.806764  696361 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1006 14:55:38.806790  696361 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-481559 NodeName:ha-481559 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1006 14:55:38.806983  696361 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-481559"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1006 14:55:38.807055  696361 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1006 14:55:38.815286  696361 binaries.go:44] Found k8s binaries, skipping transfer
	I1006 14:55:38.815345  696361 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1006 14:55:38.822791  696361 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1006 14:55:38.834974  696361 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1006 14:55:38.846564  696361 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
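	
	The 2205-byte file staged above is the stacked kubeadm config printed earlier (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration in one YAML stream). If a config like this ever needs a manual sanity check, it can be validated in place; a sketch, assuming a kubeadm release recent enough (v1.26+) to ship the validate subcommand:
	
	    # Validate the staged config against the kubeadm API types (on the node).
	    sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new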
	I1006 14:55:38.858492  696361 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1006 14:55:38.861799  696361 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1006 14:55:38.871288  696361 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 14:55:38.948793  696361 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1006 14:55:38.968510  696361 certs.go:69] Setting up /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559 for IP: 192.168.49.2
	I1006 14:55:38.968530  696361 certs.go:195] generating shared ca certs ...
	I1006 14:55:38.968554  696361 certs.go:227] acquiring lock for ca certs: {Name:mka0cc25cb6a953e937aa825fc55167759271aaf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:55:38.968714  696361 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21701-626179/.minikube/ca.key
	I1006 14:55:38.968769  696361 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21701-626179/.minikube/proxy-client-ca.key
	I1006 14:55:38.968783  696361 certs.go:257] generating profile certs ...
	I1006 14:55:38.968919  696361 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/client.key
	I1006 14:55:38.968957  696361 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.key.ac196ca6
	I1006 14:55:38.968987  696361 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.crt.ac196ca6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1006 14:55:39.196280  696361 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.crt.ac196ca6 ...
	I1006 14:55:39.196312  696361 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.crt.ac196ca6: {Name:mk7f459b7d525b4f442071bb9a0260205e39346a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:55:39.196490  696361 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.key.ac196ca6 ...
	I1006 14:55:39.196502  696361 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.key.ac196ca6: {Name:mk65b5fd8a8b6c5132068a16e7b4588d296da51b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:55:39.196576  696361 certs.go:382] copying /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.crt.ac196ca6 -> /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.crt
	I1006 14:55:39.196721  696361 certs.go:386] copying /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.key.ac196ca6 -> /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.key
	I1006 14:55:39.196852  696361 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.key
	I1006 14:55:39.196869  696361 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1006 14:55:39.196882  696361 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1006 14:55:39.196896  696361 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1006 14:55:39.196912  696361 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1006 14:55:39.196924  696361 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1006 14:55:39.196934  696361 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1006 14:55:39.196944  696361 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1006 14:55:39.196954  696361 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1006 14:55:39.197000  696361 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/629719.pem (1338 bytes)
	W1006 14:55:39.197029  696361 certs.go:480] ignoring /home/jenkins/minikube-integration/21701-626179/.minikube/certs/629719_empty.pem, impossibly tiny 0 bytes
	I1006 14:55:39.197040  696361 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca-key.pem (1675 bytes)
	I1006 14:55:39.197063  696361 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem (1082 bytes)
	I1006 14:55:39.197090  696361 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/cert.pem (1123 bytes)
	I1006 14:55:39.197112  696361 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/key.pem (1679 bytes)
	I1006 14:55:39.197153  696361 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem (1708 bytes)
	I1006 14:55:39.197178  696361 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem -> /usr/share/ca-certificates/6297192.pem
	I1006 14:55:39.197233  696361 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1006 14:55:39.197261  696361 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/629719.pem -> /usr/share/ca-certificates/629719.pem
	I1006 14:55:39.197782  696361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1006 14:55:39.216503  696361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1006 14:55:39.233130  696361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1006 14:55:39.249758  696361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1006 14:55:39.266471  696361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1006 14:55:39.282976  696361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1006 14:55:39.299460  696361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1006 14:55:39.316017  696361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1006 14:55:39.332799  696361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem --> /usr/share/ca-certificates/6297192.pem (1708 bytes)
	I1006 14:55:39.349599  696361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1006 14:55:39.366033  696361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/certs/629719.pem --> /usr/share/ca-certificates/629719.pem (1338 bytes)
	I1006 14:55:39.382453  696361 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1006 14:55:39.394283  696361 ssh_runner.go:195] Run: openssl version
	I1006 14:55:39.400262  696361 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6297192.pem && ln -fs /usr/share/ca-certificates/6297192.pem /etc/ssl/certs/6297192.pem"
	I1006 14:55:39.408362  696361 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6297192.pem
	I1006 14:55:39.411864  696361 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  6 14:13 /usr/share/ca-certificates/6297192.pem
	I1006 14:55:39.411906  696361 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6297192.pem
	I1006 14:55:39.445875  696361 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6297192.pem /etc/ssl/certs/3ec20f2e.0"
	I1006 14:55:39.453513  696361 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1006 14:55:39.462629  696361 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1006 14:55:39.466768  696361 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  6 13:56 /usr/share/ca-certificates/minikubeCA.pem
	I1006 14:55:39.466821  696361 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1006 14:55:39.509791  696361 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1006 14:55:39.520128  696361 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/629719.pem && ln -fs /usr/share/ca-certificates/629719.pem /etc/ssl/certs/629719.pem"
	I1006 14:55:39.530496  696361 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/629719.pem
	I1006 14:55:39.534149  696361 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  6 14:13 /usr/share/ca-certificates/629719.pem
	I1006 14:55:39.534196  696361 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/629719.pem
	I1006 14:55:39.568028  696361 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/629719.pem /etc/ssl/certs/51391683.0"
	I1006 14:55:39.575602  696361 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1006 14:55:39.579372  696361 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1006 14:55:39.612721  696361 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1006 14:55:39.646791  696361 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1006 14:55:39.679847  696361 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1006 14:55:39.713200  696361 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1006 14:55:39.748057  696361 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
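
Each control-plane certificate is then probed with "openssl x509 -checkend 86400", which exits non-zero if the certificate expires within the next 86400 seconds (24 hours). The same check can be written in pure Go; a sketch assuming the certificate is a PEM file on disk (the real code shells out to openssl as shown above):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the certificate at path expires within
    // duration d, the same question "-checkend" answers.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        fmt.Println("expires within 24h:", soon)
    }
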
	I1006 14:55:39.783317  696361 kubeadm.go:400] StartCluster: {Name:ha-481559 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-481559 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 14:55:39.783412  696361 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1006 14:55:39.783490  696361 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1006 14:55:39.811664  696361 cri.go:89] found id: ""
	I1006 14:55:39.811742  696361 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1006 14:55:39.819581  696361 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1006 14:55:39.819601  696361 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1006 14:55:39.819653  696361 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1006 14:55:39.826854  696361 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
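
The restart decision above hinges on the "ls" probe: if the kubelet flags file, the kubelet config, and the etcd data directory all exist, minikube treats this as an existing cluster and restarts the control plane instead of running a fresh "kubeadm init" (the failed /data/minikube test only skips an optional compat symlink). A small sketch of that heuristic, with the paths taken from the log and the helper name illustrative:

    package main

    import (
        "fmt"
        "os"
    )

    // hasExistingCluster checks the three artifacts the log lists; all
    // present means "attempt cluster restart" rather than a fresh init.
    func hasExistingCluster() bool {
        for _, p := range []string{
            "/var/lib/kubelet/kubeadm-flags.env",
            "/var/lib/kubelet/config.yaml",
            "/var/lib/minikube/etcd",
        } {
            if _, err := os.Stat(p); err != nil {
                return false
            }
        }
        return true
    }

    func main() {
        fmt.Println("attempt cluster restart:", hasExistingCluster())
    }
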
	I1006 14:55:39.827270  696361 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-481559" does not appear in /home/jenkins/minikube-integration/21701-626179/kubeconfig
	I1006 14:55:39.827381  696361 kubeconfig.go:62] /home/jenkins/minikube-integration/21701-626179/kubeconfig needs updating (will repair): [kubeconfig missing "ha-481559" cluster setting kubeconfig missing "ha-481559" context setting]
	I1006 14:55:39.827726  696361 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-626179/kubeconfig: {Name:mke84a74c9d22714f21826744ac414fa621492d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
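
The verify step found no "ha-481559" cluster or context in the kubeconfig, so the file is rewritten under a write lock before the client config below is built. A hedged sketch of such a repair using client-go's clientcmd package (the profile and server values here are illustrative, not minikube's actual implementation):

    package main

    import (
        "k8s.io/client-go/tools/clientcmd"
        api "k8s.io/client-go/tools/clientcmd/api"
    )

    // repairKubeconfig adds missing cluster and context entries for a
    // profile, then writes the file back in place.
    func repairKubeconfig(path, profile, server string) error {
        cfg, err := clientcmd.LoadFromFile(path)
        if err != nil {
            return err
        }
        if _, ok := cfg.Clusters[profile]; !ok {
            c := api.NewCluster()
            c.Server = server
            cfg.Clusters[profile] = c
        }
        if _, ok := cfg.Contexts[profile]; !ok {
            ctx := api.NewContext()
            ctx.Cluster = profile
            ctx.AuthInfo = profile
            cfg.Contexts[profile] = ctx
        }
        return clientcmd.WriteToFile(*cfg, path)
    }

    func main() {
        _ = repairKubeconfig(clientcmd.RecommendedHomeFile, "ha-481559", "https://192.168.49.2:8443")
    }
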
	I1006 14:55:39.828320  696361 kapi.go:59] client config for ha-481559: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/client.crt", KeyFile:"/home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/client.key", CAFile:"/home/jenkins/minikube-integration/21701-626179/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1006 14:55:39.828780  696361 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1006 14:55:39.828793  696361 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1006 14:55:39.828799  696361 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1006 14:55:39.828802  696361 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1006 14:55:39.828805  696361 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1006 14:55:39.828865  696361 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1006 14:55:39.829225  696361 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1006 14:55:39.836565  696361 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1006 14:55:39.836593  696361 kubeadm.go:601] duration metric: took 16.98578ms to restartPrimaryControlPlane
	I1006 14:55:39.836602  696361 kubeadm.go:402] duration metric: took 53.297464ms to StartCluster
	I1006 14:55:39.836618  696361 settings.go:142] acquiring lock: {Name:mk49b10f71f24d1f54d5c453b3b04e717e9a9100 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:55:39.836679  696361 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21701-626179/kubeconfig
	I1006 14:55:39.837293  696361 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-626179/kubeconfig: {Name:mke84a74c9d22714f21826744ac414fa621492d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:55:39.837551  696361 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1006 14:55:39.837640  696361 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1006 14:55:39.837721  696361 addons.go:69] Setting storage-provisioner=true in profile "ha-481559"
	I1006 14:55:39.837737  696361 addons.go:238] Setting addon storage-provisioner=true in "ha-481559"
	I1006 14:55:39.837742  696361 config.go:182] Loaded profile config "ha-481559": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 14:55:39.837756  696361 addons.go:69] Setting default-storageclass=true in profile "ha-481559"
	I1006 14:55:39.837792  696361 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-481559"
	I1006 14:55:39.837774  696361 host.go:66] Checking if "ha-481559" exists ...
	I1006 14:55:39.838098  696361 cli_runner.go:164] Run: docker container inspect ha-481559 --format={{.State.Status}}
	I1006 14:55:39.838222  696361 cli_runner.go:164] Run: docker container inspect ha-481559 --format={{.State.Status}}
	I1006 14:55:39.840891  696361 out.go:179] * Verifying Kubernetes components...
	I1006 14:55:39.841917  696361 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 14:55:39.856657  696361 kapi.go:59] client config for ha-481559: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/client.crt", KeyFile:"/home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/client.key", CAFile:"/home/jenkins/minikube-integration/21701-626179/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1006 14:55:39.857025  696361 addons.go:238] Setting addon default-storageclass=true in "ha-481559"
	I1006 14:55:39.857071  696361 host.go:66] Checking if "ha-481559" exists ...
	I1006 14:55:39.857581  696361 cli_runner.go:164] Run: docker container inspect ha-481559 --format={{.State.Status}}
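
Before installing each addon manifest, the docker driver confirms the node container is still alive via "docker container inspect --format={{.State.Status}}". A stand-in for that cli_runner call using os/exec:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerState returns docker's view of a container's state,
    // e.g. "running" or "exited".
    func containerState(name string) (string, error) {
        out, err := exec.Command("docker", "container", "inspect", name,
            "--format", "{{.State.Status}}").Output()
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(out)), nil
    }

    func main() {
        state, err := containerState("ha-481559")
        if err != nil {
            fmt.Println("inspect failed:", err)
            return
        }
        fmt.Println("state:", state)
    }
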
	I1006 14:55:39.858971  696361 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1006 14:55:39.860226  696361 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 14:55:39.860245  696361 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1006 14:55:39.860299  696361 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:55:39.882582  696361 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1006 14:55:39.882610  696361 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1006 14:55:39.882675  696361 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:55:39.884044  696361 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa Username:docker}
	I1006 14:55:39.900943  696361 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa Username:docker}
	I1006 14:55:39.945526  696361 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1006 14:55:39.958317  696361 node_ready.go:35] waiting up to 6m0s for node "ha-481559" to be "Ready" ...
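
The node_ready wait polls the node object for up to 6m0s, treating transient API failures (like the connection-refused errors that follow) as retryable rather than fatal. A minimal sketch of such a loop with client-go, assuming a working kubeconfig; this approximates the behavior, not minikube's exact code:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitNodeReady polls until the node reports Ready or the timeout hits.
    func waitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
        return wait.PollUntilContextTimeout(context.Background(), 2*time.Second, timeout, true,
            func(ctx context.Context) (bool, error) {
                node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
                if err != nil {
                    // Connection refused is expected mid-restart: keep polling.
                    return false, nil
                }
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady {
                        return c.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        fmt.Println(waitNodeReady(kubernetes.NewForConfigOrDie(cfg), "ha-481559", 6*time.Minute))
    }
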
	I1006 14:55:39.992150  696361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 14:55:40.007286  696361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	W1006 14:55:40.047812  696361 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:55:40.047859  696361 retry.go:31] will retry after 364.057024ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:55:40.064751  696361 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:55:40.064789  696361 retry.go:31] will retry after 327.571737ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:55:40.393452  696361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1006 14:55:40.413056  696361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1006 14:55:40.448723  696361 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:55:40.448759  696361 retry.go:31] will retry after 403.141628ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:55:40.468798  696361 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:55:40.468834  696361 retry.go:31] will retry after 276.4293ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:55:40.746367  696361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1006 14:55:40.802524  696361 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:55:40.802559  696361 retry.go:31] will retry after 311.376172ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:55:40.852754  696361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1006 14:55:40.906981  696361 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:55:40.907021  696361 retry.go:31] will retry after 474.24301ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:55:41.114995  696361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1006 14:55:41.170001  696361 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:55:41.170049  696361 retry.go:31] will retry after 897.092965ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:55:41.382425  696361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1006 14:55:41.437366  696361 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:55:41.437397  696361 retry.go:31] will retry after 965.167019ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:55:41.958939  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	I1006 14:55:42.068134  696361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1006 14:55:42.122887  696361 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:55:42.122925  696361 retry.go:31] will retry after 947.959168ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:55:42.403332  696361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1006 14:55:42.457238  696361 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:55:42.457275  696361 retry.go:31] will retry after 1.650071235s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:55:43.071967  696361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1006 14:55:43.125956  696361 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:55:43.125994  696361 retry.go:31] will retry after 2.176788338s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:55:44.108266  696361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1006 14:55:44.161384  696361 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:55:44.161417  696361 retry.go:31] will retry after 2.544730451s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:55:44.459252  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	I1006 14:55:45.304030  696361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1006 14:55:45.359630  696361 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:55:45.359670  696361 retry.go:31] will retry after 2.25019711s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:55:46.459340  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	I1006 14:55:46.706682  696361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1006 14:55:46.759581  696361 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:55:46.759615  696361 retry.go:31] will retry after 2.522056071s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:55:47.610733  696361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1006 14:55:47.664269  696361 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:55:47.664306  696361 retry.go:31] will retry after 4.640766085s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:55:48.959157  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	I1006 14:55:49.282628  696361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1006 14:55:49.336384  696361 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:55:49.336418  696361 retry.go:31] will retry after 5.673676228s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:55:51.459087  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	I1006 14:55:52.305382  696361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1006 14:55:52.359321  696361 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:55:52.359361  696361 retry.go:31] will retry after 9.481577286s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:55:53.959083  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	I1006 14:55:55.010721  696361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1006 14:55:55.065131  696361 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:55:55.065171  696361 retry.go:31] will retry after 3.836963062s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:55:56.459045  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	I1006 14:55:58.902488  696361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1006 14:55:58.955901  696361 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:55:58.955935  696361 retry.go:31] will retry after 5.927536984s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:55:58.959353  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:56:01.459047  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	I1006 14:56:01.841474  696361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1006 14:56:01.898521  696361 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:56:01.898557  696361 retry.go:31] will retry after 4.904827501s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:56:03.958922  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	I1006 14:56:04.884501  696361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1006 14:56:04.939279  696361 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:56:04.939317  696361 retry.go:31] will retry after 7.40875545s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:56:05.959924  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	I1006 14:56:06.804327  696361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1006 14:56:06.857900  696361 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:56:06.857929  696361 retry.go:31] will retry after 19.104468711s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:56:08.458883  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:56:10.459161  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	I1006 14:56:12.348374  696361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1006 14:56:12.402365  696361 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:56:12.402403  696361 retry.go:31] will retry after 18.378132313s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:56:12.959096  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:56:15.458930  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:56:17.459924  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:56:19.959250  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:56:22.459052  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:56:24.958967  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	I1006 14:56:25.962990  696361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1006 14:56:26.017478  696361 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:56:26.017517  696361 retry.go:31] will retry after 29.077614598s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:56:26.959419  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:56:28.959649  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	I1006 14:56:30.781291  696361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1006 14:56:30.836228  696361 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:56:30.836263  696361 retry.go:31] will retry after 39.344728119s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:56:30.959929  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:56:33.459024  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:56:35.959931  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:56:38.459088  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:56:40.959025  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:56:43.459871  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:56:45.959871  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:56:48.460091  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:56:50.959024  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:56:52.959070  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	I1006 14:56:55.096159  696361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1006 14:56:55.150330  696361 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:56:55.150378  696361 retry.go:31] will retry after 28.420260342s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:56:55.459257  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:56:57.959372  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:56:59.959551  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:57:01.959698  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:57:04.459881  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:57:06.959832  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:57:08.959887  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	I1006 14:57:10.181504  696361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1006 14:57:10.237344  696361 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:57:10.237488  696361 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
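Note that each apply above fails during kubectl's client-side validation step: validation needs the OpenAPI schema from the apiserver, and with the apiserver down even --validate=false would only skip the schema fetch, not make the apply itself succeed. A quick hedged probe of the same endpoint, using the binary and kubeconfig paths from the log:

    # Probe the apiserver readiness endpoint directly; "connection refused" confirms it is down.
    sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get --raw /readyz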
	W1006 14:57:11.459330  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:57:13.959081  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:57:15.959422  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:57:18.459013  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:57:20.459598  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:57:22.959317  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	I1006 14:57:23.571775  696361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1006 14:57:23.627058  696361 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:57:23.627201  696361 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1006 14:57:23.629381  696361 out.go:179] * Enabled addons: 
	I1006 14:57:23.630438  696361 addons.go:514] duration metric: took 1m43.792792491s for enable addons: enabled=[]
	W1006 14:57:25.459171  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	[... 110 more identical node_ready.go:55 "connection refused" warnings, logged every ~2-2.5s from 14:57:27 through 15:01:37, elided ...]
	W1006 15:01:39.459459  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	I1006 15:01:39.959410  696361 node_ready.go:38] duration metric: took 6m0.001052975s for node "ha-481559" to be "Ready" ...
	I1006 15:01:39.961897  696361 out.go:203] 
	W1006 15:01:39.963068  696361 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1006 15:01:39.963087  696361 out.go:285] * 
	W1006 15:01:39.964873  696361 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1006 15:01:39.966045  696361 out.go:203] 
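The six-minute wait that just expired was a poll of the node's Ready condition (the node_ready.go lines above, one attempt roughly every 2-2.5s). The same condition can be probed with a single hedged command against the node's kubeconfig:

    # One probe of the condition minikube polls; prints "True" once the node is Ready.
    sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get node ha-481559 -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'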
	
	
	==> CRI-O <==
	Oct 06 15:01:36 ha-481559 crio[517]: time="2025-10-06T15:01:36.082338227Z" level=info msg="createCtr: removing container 4b0059d47974523db945260c7699bd64dcbac10416cbee33980dd26f72a0e19c" id=d64e5b5a-11a9-4195-8099-7043018fdf73 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 15:01:36 ha-481559 crio[517]: time="2025-10-06T15:01:36.08238042Z" level=info msg="createCtr: deleting container 4b0059d47974523db945260c7699bd64dcbac10416cbee33980dd26f72a0e19c from storage" id=d64e5b5a-11a9-4195-8099-7043018fdf73 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 15:01:36 ha-481559 crio[517]: time="2025-10-06T15:01:36.084461881Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-ha-481559_kube-system_520c6060936b1c2aac479c99ed6c0355_0" id=d64e5b5a-11a9-4195-8099-7043018fdf73 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 15:01:37 ha-481559 crio[517]: time="2025-10-06T15:01:37.055937244Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=3c466103-7198-402a-bd3d-953674f2632d name=/runtime.v1.ImageService/ImageStatus
	Oct 06 15:01:37 ha-481559 crio[517]: time="2025-10-06T15:01:37.056852651Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=f0d9a52b-3192-43f3-bf76-09661ec8881f name=/runtime.v1.ImageService/ImageStatus
	Oct 06 15:01:37 ha-481559 crio[517]: time="2025-10-06T15:01:37.057807142Z" level=info msg="Creating container: kube-system/kube-apiserver-ha-481559/kube-apiserver" id=668fb572-f5be-4b5c-abb2-7b17c5471825 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 15:01:37 ha-481559 crio[517]: time="2025-10-06T15:01:37.058066903Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 15:01:37 ha-481559 crio[517]: time="2025-10-06T15:01:37.06256213Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 15:01:37 ha-481559 crio[517]: time="2025-10-06T15:01:37.062985363Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 15:01:37 ha-481559 crio[517]: time="2025-10-06T15:01:37.075143902Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=668fb572-f5be-4b5c-abb2-7b17c5471825 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 15:01:37 ha-481559 crio[517]: time="2025-10-06T15:01:37.076547924Z" level=info msg="createCtr: deleting container ID e0ac293b8abce3efed59ab591b6602d699b7d3400daa4a8fb2e55c69386948c1 from idIndex" id=668fb572-f5be-4b5c-abb2-7b17c5471825 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 15:01:37 ha-481559 crio[517]: time="2025-10-06T15:01:37.076587388Z" level=info msg="createCtr: removing container e0ac293b8abce3efed59ab591b6602d699b7d3400daa4a8fb2e55c69386948c1" id=668fb572-f5be-4b5c-abb2-7b17c5471825 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 15:01:37 ha-481559 crio[517]: time="2025-10-06T15:01:37.076627777Z" level=info msg="createCtr: deleting container e0ac293b8abce3efed59ab591b6602d699b7d3400daa4a8fb2e55c69386948c1 from storage" id=668fb572-f5be-4b5c-abb2-7b17c5471825 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 15:01:37 ha-481559 crio[517]: time="2025-10-06T15:01:37.078851699Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-ha-481559_kube-system_b4e1cca8a09d3789a7e0862428dfe0db_0" id=668fb572-f5be-4b5c-abb2-7b17c5471825 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 15:01:39 ha-481559 crio[517]: time="2025-10-06T15:01:39.055701382Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=f4a276b2-0e7f-4319-bcd6-09688ac1a5f7 name=/runtime.v1.ImageService/ImageStatus
	Oct 06 15:01:39 ha-481559 crio[517]: time="2025-10-06T15:01:39.056472784Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=4079efbd-8599-47d5-9e39-74a586381eb6 name=/runtime.v1.ImageService/ImageStatus
	Oct 06 15:01:39 ha-481559 crio[517]: time="2025-10-06T15:01:39.057301747Z" level=info msg="Creating container: kube-system/kube-scheduler-ha-481559/kube-scheduler" id=70ab66b7-406b-42f3-b058-05db47a1dcaa name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 15:01:39 ha-481559 crio[517]: time="2025-10-06T15:01:39.057531285Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 15:01:39 ha-481559 crio[517]: time="2025-10-06T15:01:39.060907945Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 15:01:39 ha-481559 crio[517]: time="2025-10-06T15:01:39.061374158Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 15:01:39 ha-481559 crio[517]: time="2025-10-06T15:01:39.076972344Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=70ab66b7-406b-42f3-b058-05db47a1dcaa name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 15:01:39 ha-481559 crio[517]: time="2025-10-06T15:01:39.078291327Z" level=info msg="createCtr: deleting container ID b963866bf433e8809f16886d1bda881f3ca7fa54ed1407283fefec553169d089 from idIndex" id=70ab66b7-406b-42f3-b058-05db47a1dcaa name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 15:01:39 ha-481559 crio[517]: time="2025-10-06T15:01:39.078328192Z" level=info msg="createCtr: removing container b963866bf433e8809f16886d1bda881f3ca7fa54ed1407283fefec553169d089" id=70ab66b7-406b-42f3-b058-05db47a1dcaa name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 15:01:39 ha-481559 crio[517]: time="2025-10-06T15:01:39.078383536Z" level=info msg="createCtr: deleting container b963866bf433e8809f16886d1bda881f3ca7fa54ed1407283fefec553169d089 from storage" id=70ab66b7-406b-42f3-b058-05db47a1dcaa name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 15:01:39 ha-481559 crio[517]: time="2025-10-06T15:01:39.080448731Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-ha-481559_kube-system_cc93cb8d89afaa943672c70952b45174_0" id=70ab66b7-406b-42f3-b058-05db47a1dcaa name=/runtime.v1.RuntimeService/CreateContainer
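Every CreateContainer attempt in this CRI-O log dies with "cannot open sd-bus: No such file or directory": the OCI runtime is configured for the systemd cgroup manager and tries to reach systemd over D-Bus, but no bus socket is reachable inside the kicbase node. A hedged way to check both halves of that from the host (container name taken from the log; the /etc/crio path is CRI-O's conventional config location and may differ in this image):

    # Is a systemd D-Bus socket present inside the node container?
    docker exec ha-481559 ls -l /run/systemd/private /run/dbus/system_bus_socket
    # Which cgroup manager is CRI-O configured to use?
    docker exec ha-481559 grep -r cgroup_manager /etc/crio/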
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 15:01:42.783434    2201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 15:01:42.783988    2201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 15:01:42.785639    2201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 15:01:42.786154    2201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 15:01:42.787571    2201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	
	
	==> kernel <==
	 15:01:42 up  5:43,  0 user,  load average: 0.01, 0.10, 0.15
	Linux ha-481559 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 06 15:01:36 ha-481559 kubelet[669]: E1006 15:01:36.084885     669 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 06 15:01:36 ha-481559 kubelet[669]:         container etcd start failed in pod etcd-ha-481559_kube-system(520c6060936b1c2aac479c99ed6c0355): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 06 15:01:36 ha-481559 kubelet[669]:  > logger="UnhandledError"
	Oct 06 15:01:36 ha-481559 kubelet[669]: E1006 15:01:36.084930     669 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-ha-481559" podUID="520c6060936b1c2aac479c99ed6c0355"
	Oct 06 15:01:37 ha-481559 kubelet[669]: E1006 15:01:37.055476     669 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-481559\" not found" node="ha-481559"
	Oct 06 15:01:37 ha-481559 kubelet[669]: E1006 15:01:37.079140     669 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 06 15:01:37 ha-481559 kubelet[669]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 06 15:01:37 ha-481559 kubelet[669]:  > podSandboxID="68f64753e0c9bc4241bd77357a937ef42a17b59c9ee0eba280403f1335b5cc1e"
	Oct 06 15:01:37 ha-481559 kubelet[669]: E1006 15:01:37.079287     669 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 06 15:01:37 ha-481559 kubelet[669]:         container kube-apiserver start failed in pod kube-apiserver-ha-481559_kube-system(b4e1cca8a09d3789a7e0862428dfe0db): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 06 15:01:37 ha-481559 kubelet[669]:  > logger="UnhandledError"
	Oct 06 15:01:37 ha-481559 kubelet[669]: E1006 15:01:37.079323     669 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-ha-481559" podUID="b4e1cca8a09d3789a7e0862428dfe0db"
	Oct 06 15:01:39 ha-481559 kubelet[669]: E1006 15:01:39.055306     669 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-481559\" not found" node="ha-481559"
	Oct 06 15:01:39 ha-481559 kubelet[669]: E1006 15:01:39.070227     669 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-481559\" not found"
	Oct 06 15:01:39 ha-481559 kubelet[669]: E1006 15:01:39.080716     669 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 06 15:01:39 ha-481559 kubelet[669]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 06 15:01:39 ha-481559 kubelet[669]:  > podSandboxID="3b7ddccf443b7c3df7fe7a1aafb38b39a777788c5f30f29647a93377ee88f8e0"
	Oct 06 15:01:39 ha-481559 kubelet[669]: E1006 15:01:39.080803     669 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 06 15:01:39 ha-481559 kubelet[669]:         container kube-scheduler start failed in pod kube-scheduler-ha-481559_kube-system(cc93cb8d89afaa943672c70952b45174): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 06 15:01:39 ha-481559 kubelet[669]:  > logger="UnhandledError"
	Oct 06 15:01:39 ha-481559 kubelet[669]: E1006 15:01:39.080836     669 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-ha-481559" podUID="cc93cb8d89afaa943672c70952b45174"
	Oct 06 15:01:41 ha-481559 kubelet[669]: E1006 15:01:41.692484     669 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-481559?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 06 15:01:41 ha-481559 kubelet[669]: I1006 15:01:41.870779     669 kubelet_node_status.go:75] "Attempting to register node" node="ha-481559"
	Oct 06 15:01:41 ha-481559 kubelet[669]: E1006 15:01:41.871191     669 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-481559"
	Oct 06 15:01:42 ha-481559 kubelet[669]: E1006 15:01:42.633135     669 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8443: connect: connection refused" event="&Event{ObjectMeta:{ha-481559.186beeb4a4cbaf58  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-481559,UID:ha-481559,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ha-481559 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ha-481559,},FirstTimestamp:2025-10-06 14:55:39.044646744 +0000 UTC m=+0.073577294,LastTimestamp:2025-10-06 14:55:39.044646744 +0000 UTC m=+0.073577294,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-481559,}"
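The kubelet never manages to register the node or start the static control-plane pods, which matches the GUEST_START failure above. Per the advice box earlier in this log, a full log bundle can be captured for a bug report; with several profiles on the host the profile must be named explicitly:

    # Capture this profile's full logs to a file for attachment to an issue.
    out/minikube-linux-amd64 logs --file=logs.txt -p ha-481559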
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-481559 -n ha-481559
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-481559 -n ha-481559: exit status 2 (296.836215ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "ha-481559" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (1.82s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (1.54s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:415: expected profile "ha-481559" in json of 'profile list' to have "Degraded" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-481559\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-481559\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d\",\"Memory\":3072,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.34.1\",\"ClusterName\":\"ha-481559\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.49.2\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"MountString\":\"\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"DisableCoreDNSLog\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-481559
helpers_test.go:243: (dbg) docker inspect ha-481559:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "8b017d29b6b188c11217460aa328e959f3cceef4aaac68c0efbe9c3f356f27b0",
	        "Created": "2025-10-06T14:44:39.623616791Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 696563,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-06T14:55:32.848872757Z",
	            "FinishedAt": "2025-10-06T14:55:31.716309888Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/8b017d29b6b188c11217460aa328e959f3cceef4aaac68c0efbe9c3f356f27b0/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8b017d29b6b188c11217460aa328e959f3cceef4aaac68c0efbe9c3f356f27b0/hostname",
	        "HostsPath": "/var/lib/docker/containers/8b017d29b6b188c11217460aa328e959f3cceef4aaac68c0efbe9c3f356f27b0/hosts",
	        "LogPath": "/var/lib/docker/containers/8b017d29b6b188c11217460aa328e959f3cceef4aaac68c0efbe9c3f356f27b0/8b017d29b6b188c11217460aa328e959f3cceef4aaac68c0efbe9c3f356f27b0-json.log",
	        "Name": "/ha-481559",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-481559:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-481559",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "8b017d29b6b188c11217460aa328e959f3cceef4aaac68c0efbe9c3f356f27b0",
	                "LowerDir": "/var/lib/docker/overlay2/764c4d13032cea1c981a1513641378a4e309e5bc01b5356c6fecba1c3e0e2311-init/diff:/var/lib/docker/overlay2/498c39ad2e273bbda04a4b230222b9767ea2da097b1fe98436168d26143cd080/diff",
	                "MergedDir": "/var/lib/docker/overlay2/764c4d13032cea1c981a1513641378a4e309e5bc01b5356c6fecba1c3e0e2311/merged",
	                "UpperDir": "/var/lib/docker/overlay2/764c4d13032cea1c981a1513641378a4e309e5bc01b5356c6fecba1c3e0e2311/diff",
	                "WorkDir": "/var/lib/docker/overlay2/764c4d13032cea1c981a1513641378a4e309e5bc01b5356c6fecba1c3e0e2311/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-481559",
	                "Source": "/var/lib/docker/volumes/ha-481559/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-481559",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-481559",
	                "name.minikube.sigs.k8s.io": "ha-481559",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a18d7522c85960ccdcf70fe347e0c10a64182561d1f729321bfbf2cdfd2482d4",
	            "SandboxKey": "/var/run/docker/netns/a18d7522c859",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32888"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32889"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32892"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32890"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32891"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-481559": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "d2:f7:17:fa:b2:38",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "be549c6a1ae4457d4629d9a7f86cde88021333ee0af8bb7a740b008115c43dde",
	                    "EndpointID": "a1d09ec0db4820720a30f43507e6c86000afb21b7ea62df9051d26d4095c5091",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-481559",
	                        "8b017d29b6b1"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
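For reference, a minimal Go sketch (not part of the harness, assuming a local docker CLI) of pulling the mapped SSH port out of an inspect dump like the one above; the Go template and the container name ha-481559 are copied verbatim from this report, and against the JSON above it would print 32888:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same template this log later shows cli_runner.go executing.
	tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, "ha-481559").Output()
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println(strings.TrimSpace(string(out))) // "32888" for the container inspected above
}
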
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-481559 -n ha-481559
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-481559 -n ha-481559: exit status 2 (285.009038ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-481559 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                    ARGS                                     │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ kubectl │ ha-481559 kubectl -- rollout status deployment/busybox                      │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 14:52 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 14:52 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 14:52 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 14:52 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 14:53 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 14:53 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 14:53 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 14:53 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 14:53 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 14:53 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 14:54 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 14:54 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'       │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 14:54 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- exec  -- nslookup kubernetes.io                        │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 14:54 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- exec  -- nslookup kubernetes.default                   │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 14:54 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 14:54 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'       │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 14:54 UTC │                     │
	│ node    │ ha-481559 node add --alsologtostderr -v 5                                   │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 14:54 UTC │                     │
	│ node    │ ha-481559 node stop m02 --alsologtostderr -v 5                              │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 14:54 UTC │                     │
	│ node    │ ha-481559 node start m02 --alsologtostderr -v 5                             │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 14:54 UTC │                     │
	│ node    │ ha-481559 node list --alsologtostderr -v 5                                  │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 14:55 UTC │                     │
	│ stop    │ ha-481559 stop --alsologtostderr -v 5                                       │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 14:55 UTC │ 06 Oct 25 14:55 UTC │
	│ start   │ ha-481559 start --wait true --alsologtostderr -v 5                          │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 14:55 UTC │                     │
	│ node    │ ha-481559 node list --alsologtostderr -v 5                                  │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 15:01 UTC │                     │
	│ node    │ ha-481559 node delete m03 --alsologtostderr -v 5                            │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 15:01 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
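	(The repeated pod-IP queries recorded in this table can be reproduced outside the harness; a minimal sketch, assuming kubectl is on PATH and pointed at the cluster, with only the jsonpath expression taken verbatim from the table:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// The same query the audit table shows minikube's kubectl wrapper running.
		out, err := exec.Command("kubectl", "get", "pods",
			"-o", `jsonpath={.items[*].status.podIP}`).CombinedOutput()
		if err != nil {
			fmt.Println("kubectl failed:", err)
		}
		fmt.Printf("pod IPs: %s\n", out)
	}
	)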
	
	
	==> Last Start <==
	Log file created at: 2025/10/06 14:55:32
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1006 14:55:32.625450  696361 out.go:360] Setting OutFile to fd 1 ...
	I1006 14:55:32.625699  696361 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 14:55:32.625708  696361 out.go:374] Setting ErrFile to fd 2...
	I1006 14:55:32.625712  696361 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 14:55:32.625887  696361 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-626179/.minikube/bin
	I1006 14:55:32.626365  696361 out.go:368] Setting JSON to false
	I1006 14:55:32.627324  696361 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":20269,"bootTime":1759742264,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1006 14:55:32.627441  696361 start.go:140] virtualization: kvm guest
	I1006 14:55:32.629359  696361 out.go:179] * [ha-481559] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1006 14:55:32.630682  696361 out.go:179]   - MINIKUBE_LOCATION=21701
	I1006 14:55:32.630681  696361 notify.go:220] Checking for updates...
	I1006 14:55:32.632684  696361 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1006 14:55:32.633920  696361 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21701-626179/kubeconfig
	I1006 14:55:32.635038  696361 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21701-626179/.minikube
	I1006 14:55:32.635990  696361 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1006 14:55:32.636965  696361 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1006 14:55:32.638369  696361 config.go:182] Loaded profile config "ha-481559": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 14:55:32.638498  696361 driver.go:421] Setting default libvirt URI to qemu:///system
	I1006 14:55:32.662312  696361 docker.go:123] docker version: linux-28.5.0:Docker Engine - Community
	I1006 14:55:32.662403  696361 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 14:55:32.719438  696361 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-06 14:55:32.709294788 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1006 14:55:32.719550  696361 docker.go:318] overlay module found
	I1006 14:55:32.721174  696361 out.go:179] * Using the docker driver based on existing profile
	I1006 14:55:32.722228  696361 start.go:304] selected driver: docker
	I1006 14:55:32.722242  696361 start.go:924] validating driver "docker" against &{Name:ha-481559 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-481559 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 14:55:32.722316  696361 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1006 14:55:32.722398  696361 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 14:55:32.778099  696361 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-06 14:55:32.768235461 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1006 14:55:32.778829  696361 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1006 14:55:32.778865  696361 cni.go:84] Creating CNI manager for ""
	I1006 14:55:32.778913  696361 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1006 14:55:32.778963  696361 start.go:348] cluster config:
	{Name:ha-481559 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-481559 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 14:55:32.780704  696361 out.go:179] * Starting "ha-481559" primary control-plane node in "ha-481559" cluster
	I1006 14:55:32.781770  696361 cache.go:123] Beginning downloading kic base image for docker with crio
	I1006 14:55:32.782811  696361 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1006 14:55:32.783693  696361 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1006 14:55:32.783726  696361 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21701-626179/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1006 14:55:32.783724  696361 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1006 14:55:32.783743  696361 cache.go:58] Caching tarball of preloaded images
	I1006 14:55:32.783836  696361 preload.go:233] Found /home/jenkins/minikube-integration/21701-626179/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1006 14:55:32.783847  696361 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1006 14:55:32.783950  696361 profile.go:143] Saving config to /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/config.json ...
	I1006 14:55:32.804191  696361 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1006 14:55:32.804233  696361 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1006 14:55:32.804253  696361 cache.go:232] Successfully downloaded all kic artifacts
	I1006 14:55:32.804278  696361 start.go:360] acquireMachinesLock for ha-481559: {Name:mk240cd185ab39e9e4d3fa7c476aea5736cb5b11 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1006 14:55:32.804339  696361 start.go:364] duration metric: took 38.329µs to acquireMachinesLock for "ha-481559"
	I1006 14:55:32.804358  696361 start.go:96] Skipping create...Using existing machine configuration
	I1006 14:55:32.804363  696361 fix.go:54] fixHost starting: 
	I1006 14:55:32.804593  696361 cli_runner.go:164] Run: docker container inspect ha-481559 --format={{.State.Status}}
	I1006 14:55:32.821756  696361 fix.go:112] recreateIfNeeded on ha-481559: state=Stopped err=<nil>
	W1006 14:55:32.821781  696361 fix.go:138] unexpected machine state, will restart: <nil>
	I1006 14:55:32.823475  696361 out.go:252] * Restarting existing docker container for "ha-481559" ...
	I1006 14:55:32.823539  696361 cli_runner.go:164] Run: docker start ha-481559
	I1006 14:55:33.064065  696361 cli_runner.go:164] Run: docker container inspect ha-481559 --format={{.State.Status}}
	I1006 14:55:33.082711  696361 kic.go:430] container "ha-481559" state is running.
	I1006 14:55:33.083092  696361 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-481559
	I1006 14:55:33.102599  696361 profile.go:143] Saving config to /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/config.json ...
	I1006 14:55:33.102818  696361 machine.go:93] provisionDockerMachine start ...
	I1006 14:55:33.102885  696361 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:55:33.121902  696361 main.go:141] libmachine: Using SSH client type: native
	I1006 14:55:33.122245  696361 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32888 <nil> <nil>}
	I1006 14:55:33.122265  696361 main.go:141] libmachine: About to run SSH command:
	hostname
	I1006 14:55:33.122961  696361 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:35020->127.0.0.1:32888: read: connection reset by peer
	I1006 14:55:36.268055  696361 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-481559
	
	I1006 14:55:36.268107  696361 ubuntu.go:182] provisioning hostname "ha-481559"
	I1006 14:55:36.268177  696361 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:55:36.286749  696361 main.go:141] libmachine: Using SSH client type: native
	I1006 14:55:36.287029  696361 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32888 <nil> <nil>}
	I1006 14:55:36.287044  696361 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-481559 && echo "ha-481559" | sudo tee /etc/hostname
	I1006 14:55:36.438131  696361 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-481559
	
	I1006 14:55:36.438276  696361 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:55:36.455780  696361 main.go:141] libmachine: Using SSH client type: native
	I1006 14:55:36.455989  696361 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32888 <nil> <nil>}
	I1006 14:55:36.456006  696361 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-481559' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-481559/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-481559' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1006 14:55:36.598528  696361 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1006 14:55:36.598558  696361 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21701-626179/.minikube CaCertPath:/home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21701-626179/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21701-626179/.minikube}
	I1006 14:55:36.598594  696361 ubuntu.go:190] setting up certificates
	I1006 14:55:36.598608  696361 provision.go:84] configureAuth start
	I1006 14:55:36.598671  696361 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-481559
	I1006 14:55:36.615965  696361 provision.go:143] copyHostCerts
	I1006 14:55:36.616004  696361 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21701-626179/.minikube/ca.pem
	I1006 14:55:36.616065  696361 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-626179/.minikube/ca.pem, removing ...
	I1006 14:55:36.616086  696361 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-626179/.minikube/ca.pem
	I1006 14:55:36.616175  696361 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21701-626179/.minikube/ca.pem (1082 bytes)
	I1006 14:55:36.616305  696361 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21701-626179/.minikube/cert.pem
	I1006 14:55:36.616337  696361 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-626179/.minikube/cert.pem, removing ...
	I1006 14:55:36.616347  696361 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-626179/.minikube/cert.pem
	I1006 14:55:36.616392  696361 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21701-626179/.minikube/cert.pem (1123 bytes)
	I1006 14:55:36.616465  696361 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21701-626179/.minikube/key.pem
	I1006 14:55:36.616495  696361 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-626179/.minikube/key.pem, removing ...
	I1006 14:55:36.616506  696361 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-626179/.minikube/key.pem
	I1006 14:55:36.616549  696361 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21701-626179/.minikube/key.pem (1679 bytes)
	I1006 14:55:36.616693  696361 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21701-626179/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca-key.pem org=jenkins.ha-481559 san=[127.0.0.1 192.168.49.2 ha-481559 localhost minikube]
	I1006 14:55:36.950020  696361 provision.go:177] copyRemoteCerts
	I1006 14:55:36.950096  696361 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1006 14:55:36.950140  696361 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:55:36.967901  696361 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa Username:docker}
	I1006 14:55:37.069642  696361 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1006 14:55:37.069695  696361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1006 14:55:37.087171  696361 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1006 14:55:37.087278  696361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1006 14:55:37.104388  696361 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1006 14:55:37.104471  696361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1006 14:55:37.121024  696361 provision.go:87] duration metric: took 522.404021ms to configureAuth
	I1006 14:55:37.121046  696361 ubuntu.go:206] setting minikube options for container-runtime
	I1006 14:55:37.121222  696361 config.go:182] Loaded profile config "ha-481559": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 14:55:37.121328  696361 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:55:37.139234  696361 main.go:141] libmachine: Using SSH client type: native
	I1006 14:55:37.139495  696361 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32888 <nil> <nil>}
	I1006 14:55:37.139522  696361 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1006 14:55:37.394808  696361 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1006 14:55:37.394835  696361 machine.go:96] duration metric: took 4.292002113s to provisionDockerMachine
	I1006 14:55:37.394849  696361 start.go:293] postStartSetup for "ha-481559" (driver="docker")
	I1006 14:55:37.394860  696361 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1006 14:55:37.394929  696361 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1006 14:55:37.394973  696361 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:55:37.413054  696361 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa Username:docker}
	I1006 14:55:37.514362  696361 ssh_runner.go:195] Run: cat /etc/os-release
	I1006 14:55:37.517813  696361 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1006 14:55:37.517836  696361 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1006 14:55:37.517847  696361 filesync.go:126] Scanning /home/jenkins/minikube-integration/21701-626179/.minikube/addons for local assets ...
	I1006 14:55:37.517906  696361 filesync.go:126] Scanning /home/jenkins/minikube-integration/21701-626179/.minikube/files for local assets ...
	I1006 14:55:37.518019  696361 filesync.go:149] local asset: /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem -> 6297192.pem in /etc/ssl/certs
	I1006 14:55:37.518030  696361 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem -> /etc/ssl/certs/6297192.pem
	I1006 14:55:37.518152  696361 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1006 14:55:37.525401  696361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem --> /etc/ssl/certs/6297192.pem (1708 bytes)
	I1006 14:55:37.541908  696361 start.go:296] duration metric: took 147.043607ms for postStartSetup
	I1006 14:55:37.541980  696361 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1006 14:55:37.542026  696361 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:55:37.559403  696361 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa Username:docker}
	I1006 14:55:37.657540  696361 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1006 14:55:37.662107  696361 fix.go:56] duration metric: took 4.857735821s for fixHost
	I1006 14:55:37.662133  696361 start.go:83] releasing machines lock for "ha-481559", held for 4.857782629s
	I1006 14:55:37.662199  696361 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-481559
	I1006 14:55:37.679712  696361 ssh_runner.go:195] Run: cat /version.json
	I1006 14:55:37.679736  696361 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1006 14:55:37.679759  696361 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:55:37.679787  696361 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:55:37.697300  696361 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa Username:docker}
	I1006 14:55:37.697564  696361 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa Username:docker}
	I1006 14:55:37.851243  696361 ssh_runner.go:195] Run: systemctl --version
	I1006 14:55:37.857782  696361 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1006 14:55:37.892065  696361 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1006 14:55:37.896595  696361 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1006 14:55:37.896653  696361 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1006 14:55:37.904304  696361 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1006 14:55:37.904326  696361 start.go:495] detecting cgroup driver to use...
	I1006 14:55:37.904354  696361 detect.go:190] detected "systemd" cgroup driver on host os
	I1006 14:55:37.904388  696361 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1006 14:55:37.918633  696361 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1006 14:55:37.929951  696361 docker.go:218] disabling cri-docker service (if available) ...
	I1006 14:55:37.930003  696361 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1006 14:55:37.943242  696361 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1006 14:55:37.954619  696361 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1006 14:55:38.026399  696361 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1006 14:55:38.105961  696361 docker.go:234] disabling docker service ...
	I1006 14:55:38.106042  696361 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1006 14:55:38.120803  696361 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1006 14:55:38.132404  696361 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1006 14:55:38.209222  696361 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1006 14:55:38.289009  696361 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1006 14:55:38.301313  696361 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1006 14:55:38.315068  696361 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1006 14:55:38.315130  696361 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:55:38.323823  696361 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1006 14:55:38.323882  696361 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:55:38.332351  696361 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:55:38.340690  696361 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:55:38.349706  696361 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1006 14:55:38.357352  696361 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:55:38.365990  696361 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:55:38.374123  696361 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 14:55:38.382364  696361 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1006 14:55:38.389293  696361 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1006 14:55:38.396102  696361 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 14:55:38.474259  696361 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1006 14:55:38.579652  696361 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1006 14:55:38.579712  696361 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1006 14:55:38.583658  696361 start.go:563] Will wait 60s for crictl version
	I1006 14:55:38.583711  696361 ssh_runner.go:195] Run: which crictl
	I1006 14:55:38.587093  696361 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1006 14:55:38.611002  696361 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1006 14:55:38.611081  696361 ssh_runner.go:195] Run: crio --version
	I1006 14:55:38.639866  696361 ssh_runner.go:195] Run: crio --version
	I1006 14:55:38.670329  696361 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1006 14:55:38.671337  696361 cli_runner.go:164] Run: docker network inspect ha-481559 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1006 14:55:38.687899  696361 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1006 14:55:38.691971  696361 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1006 14:55:38.702038  696361 kubeadm.go:883] updating cluster {Name:ha-481559 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-481559 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1006 14:55:38.702130  696361 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1006 14:55:38.702176  696361 ssh_runner.go:195] Run: sudo crictl images --output json
	I1006 14:55:38.734706  696361 crio.go:514] all images are preloaded for cri-o runtime.
	I1006 14:55:38.734729  696361 crio.go:433] Images already preloaded, skipping extraction
	I1006 14:55:38.734788  696361 ssh_runner.go:195] Run: sudo crictl images --output json
	I1006 14:55:38.761257  696361 crio.go:514] all images are preloaded for cri-o runtime.
	I1006 14:55:38.761292  696361 cache_images.go:85] Images are preloaded, skipping loading
	I1006 14:55:38.761302  696361 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1006 14:55:38.761450  696361 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-481559 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-481559 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1006 14:55:38.761537  696361 ssh_runner.go:195] Run: crio config
	I1006 14:55:38.806722  696361 cni.go:84] Creating CNI manager for ""
	I1006 14:55:38.806741  696361 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1006 14:55:38.806764  696361 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1006 14:55:38.806790  696361 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-481559 NodeName:ha-481559 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1006 14:55:38.806983  696361 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-481559"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
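	
	(A minimal sketch, assuming gopkg.in/yaml.v3 is available, of decoding the KubeletConfiguration document generated above and confirming its cgroupDriver matches the "systemd" driver detected earlier in this log; the keys and values are copied from that document:

	package main

	import (
		"fmt"

		"gopkg.in/yaml.v3"
	)

	const kubeletCfg = `apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock`

	func main() {
		var cfg struct {
			Kind                     string `yaml:"kind"`
			CgroupDriver             string `yaml:"cgroupDriver"`
			ContainerRuntimeEndpoint string `yaml:"containerRuntimeEndpoint"`
		}
		if err := yaml.Unmarshal([]byte(kubeletCfg), &cfg); err != nil {
			panic(err)
		}
		// Prints: KubeletConfiguration systemd unix:///var/run/crio/crio.sock
		fmt.Println(cfg.Kind, cfg.CgroupDriver, cfg.ContainerRuntimeEndpoint)
	}
	)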
	
	I1006 14:55:38.807055  696361 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1006 14:55:38.815286  696361 binaries.go:44] Found k8s binaries, skipping transfer
	I1006 14:55:38.815345  696361 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1006 14:55:38.822791  696361 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1006 14:55:38.834974  696361 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1006 14:55:38.846564  696361 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1006 14:55:38.858492  696361 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1006 14:55:38.861799  696361 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1006 14:55:38.871288  696361 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 14:55:38.948793  696361 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1006 14:55:38.968510  696361 certs.go:69] Setting up /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559 for IP: 192.168.49.2
	I1006 14:55:38.968530  696361 certs.go:195] generating shared ca certs ...
	I1006 14:55:38.968554  696361 certs.go:227] acquiring lock for ca certs: {Name:mka0cc25cb6a953e937aa825fc55167759271aaf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:55:38.968714  696361 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21701-626179/.minikube/ca.key
	I1006 14:55:38.968769  696361 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21701-626179/.minikube/proxy-client-ca.key
	I1006 14:55:38.968783  696361 certs.go:257] generating profile certs ...
	I1006 14:55:38.968919  696361 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/client.key
	I1006 14:55:38.968957  696361 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.key.ac196ca6
	I1006 14:55:38.968987  696361 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.crt.ac196ca6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1006 14:55:39.196280  696361 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.crt.ac196ca6 ...
	I1006 14:55:39.196312  696361 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.crt.ac196ca6: {Name:mk7f459b7d525b4f442071bb9a0260205e39346a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:55:39.196490  696361 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.key.ac196ca6 ...
	I1006 14:55:39.196502  696361 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.key.ac196ca6: {Name:mk65b5fd8a8b6c5132068a16e7b4588d296da51b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:55:39.196576  696361 certs.go:382] copying /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.crt.ac196ca6 -> /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.crt
	I1006 14:55:39.196721  696361 certs.go:386] copying /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.key.ac196ca6 -> /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.key
	I1006 14:55:39.196852  696361 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.key
	I1006 14:55:39.196869  696361 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1006 14:55:39.196882  696361 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1006 14:55:39.196896  696361 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1006 14:55:39.196912  696361 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1006 14:55:39.196924  696361 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1006 14:55:39.196934  696361 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1006 14:55:39.196944  696361 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1006 14:55:39.196954  696361 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1006 14:55:39.197000  696361 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/629719.pem (1338 bytes)
	W1006 14:55:39.197029  696361 certs.go:480] ignoring /home/jenkins/minikube-integration/21701-626179/.minikube/certs/629719_empty.pem, impossibly tiny 0 bytes
	I1006 14:55:39.197040  696361 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca-key.pem (1675 bytes)
	I1006 14:55:39.197063  696361 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem (1082 bytes)
	I1006 14:55:39.197090  696361 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/cert.pem (1123 bytes)
	I1006 14:55:39.197112  696361 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/key.pem (1679 bytes)
	I1006 14:55:39.197153  696361 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem (1708 bytes)
	I1006 14:55:39.197178  696361 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem -> /usr/share/ca-certificates/6297192.pem
	I1006 14:55:39.197233  696361 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1006 14:55:39.197261  696361 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/629719.pem -> /usr/share/ca-certificates/629719.pem
	I1006 14:55:39.197782  696361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1006 14:55:39.216503  696361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1006 14:55:39.233130  696361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1006 14:55:39.249758  696361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1006 14:55:39.266471  696361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1006 14:55:39.282976  696361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1006 14:55:39.299460  696361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1006 14:55:39.316017  696361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1006 14:55:39.332799  696361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem --> /usr/share/ca-certificates/6297192.pem (1708 bytes)
	I1006 14:55:39.349599  696361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1006 14:55:39.366033  696361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/certs/629719.pem --> /usr/share/ca-certificates/629719.pem (1338 bytes)
	I1006 14:55:39.382453  696361 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1006 14:55:39.394283  696361 ssh_runner.go:195] Run: openssl version
	I1006 14:55:39.400262  696361 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6297192.pem && ln -fs /usr/share/ca-certificates/6297192.pem /etc/ssl/certs/6297192.pem"
	I1006 14:55:39.408362  696361 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6297192.pem
	I1006 14:55:39.411864  696361 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  6 14:13 /usr/share/ca-certificates/6297192.pem
	I1006 14:55:39.411906  696361 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6297192.pem
	I1006 14:55:39.445875  696361 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6297192.pem /etc/ssl/certs/3ec20f2e.0"
	I1006 14:55:39.453513  696361 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1006 14:55:39.462629  696361 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1006 14:55:39.466768  696361 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  6 13:56 /usr/share/ca-certificates/minikubeCA.pem
	I1006 14:55:39.466821  696361 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1006 14:55:39.509791  696361 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1006 14:55:39.520128  696361 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/629719.pem && ln -fs /usr/share/ca-certificates/629719.pem /etc/ssl/certs/629719.pem"
	I1006 14:55:39.530496  696361 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/629719.pem
	I1006 14:55:39.534149  696361 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  6 14:13 /usr/share/ca-certificates/629719.pem
	I1006 14:55:39.534196  696361 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/629719.pem
	I1006 14:55:39.568028  696361 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/629719.pem /etc/ssl/certs/51391683.0"
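
The three openssl/ln pairs above implement OpenSSL's subject-hash lookup convention: each CA certificate copied under /usr/share/ca-certificates is hashed with `openssl x509 -hash -noout`, and a `<hash>.0` symlink is created in /etc/ssl/certs so TLS clients can resolve the issuer by hash. A minimal Go sketch of the same two steps (a hypothetical local helper; minikube itself runs these commands over SSH via ssh_runner, as logged):

    package certs

    import (
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // installCASymlink computes the OpenSSL subject hash of certPath and
    // symlinks /etc/ssl/certs/<hash>.0 to it, mirroring the logged commands.
    func installCASymlink(certPath string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		return err
    	}
    	hash := strings.TrimSpace(string(out))
    	link := filepath.Join("/etc/ssl/certs", hash+".0")
    	_ = os.Remove(link) // ln -fs semantics: replace any existing link
    	return os.Symlink(certPath, link)
    }
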
	I1006 14:55:39.575602  696361 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1006 14:55:39.579372  696361 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1006 14:55:39.612721  696361 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1006 14:55:39.646791  696361 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1006 14:55:39.679847  696361 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1006 14:55:39.713200  696361 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1006 14:55:39.748057  696361 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
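
Each `-checkend 86400` run above asks whether the given control-plane certificate will expire within the next 86400 seconds (24 hours); a non-zero exit would force regeneration. The same test expressed in Go, under the assumption that each file holds a single PEM-encoded certificate:

    package certs

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"errors"
    	"os"
    	"time"
    )

    // expiresSoon reports whether the PEM certificate at path expires within
    // the next 24h, i.e. what `openssl x509 -noout -checkend 86400` checks.
    func expiresSoon(path string) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, errors.New("no PEM data in " + path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return cert.NotAfter.Before(time.Now().Add(24 * time.Hour)), nil
    }
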
	I1006 14:55:39.783317  696361 kubeadm.go:400] StartCluster: {Name:ha-481559 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-481559 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
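
The StartCluster line above is the profile's ClusterConfig struct printed with Go's %+v verb, so every `Field:value` pair maps onto a struct field. An illustrative fragment (field names taken from the dump above; the full type lives in minikube's config package and is larger than this sketch):

    package config

    // Illustrative subset only, not the complete minikube type.
    type ClusterConfig struct {
    	Name             string // "ha-481559"
    	Memory           int    // MiB; 3072 above
    	CPUs             int
    	Driver           string // "docker"
    	KubernetesConfig KubernetesConfig
    	Nodes            []Node
    }

    type KubernetesConfig struct {
    	KubernetesVersion string // "v1.34.1"
    	ClusterName       string
    	ContainerRuntime  string // "crio"
    	ServiceCIDR       string // "10.96.0.0/12"
    }

    type Node struct {
    	Name              string
    	IP                string // "192.168.49.2"
    	Port              int    // 8443
    	KubernetesVersion string
    	ControlPlane      bool
    	Worker            bool
    }
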
	I1006 14:55:39.783412  696361 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1006 14:55:39.783490  696361 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1006 14:55:39.811664  696361 cri.go:89] found id: ""
	I1006 14:55:39.811742  696361 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1006 14:55:39.819581  696361 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1006 14:55:39.819601  696361 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1006 14:55:39.819653  696361 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1006 14:55:39.826854  696361 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1006 14:55:39.827270  696361 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-481559" does not appear in /home/jenkins/minikube-integration/21701-626179/kubeconfig
	I1006 14:55:39.827381  696361 kubeconfig.go:62] /home/jenkins/minikube-integration/21701-626179/kubeconfig needs updating (will repair): [kubeconfig missing "ha-481559" cluster setting kubeconfig missing "ha-481559" context setting]
	I1006 14:55:39.827726  696361 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-626179/kubeconfig: {Name:mke84a74c9d22714f21826744ac414fa621492d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:55:39.828320  696361 kapi.go:59] client config for ha-481559: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/client.crt", KeyFile:"/home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/client.key", CAFile:"/home/jenkins/minikube-integration/21701-626179/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
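
The kapi.go line above shows the rest.Config minikube assembles from the profile's client certificate, client key, and cluster CA. A minimal client-go sketch that builds the same kind of config and a clientset from it (paths copied from the log; an illustration, not minikube's exact code):

    package kapi

    import (
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/rest"
    )

    func newHAClient() (*kubernetes.Clientset, error) {
    	cfg := &rest.Config{
    		Host: "https://192.168.49.2:8443",
    		TLSClientConfig: rest.TLSClientConfig{
    			CertFile: "/home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/client.crt",
    			KeyFile:  "/home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/client.key",
    			CAFile:   "/home/jenkins/minikube-integration/21701-626179/.minikube/ca.crt",
    		},
    	}
    	return kubernetes.NewForConfig(cfg)
    }
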
	I1006 14:55:39.828780  696361 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1006 14:55:39.828793  696361 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1006 14:55:39.828799  696361 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1006 14:55:39.828802  696361 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1006 14:55:39.828805  696361 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1006 14:55:39.828865  696361 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1006 14:55:39.829225  696361 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1006 14:55:39.836565  696361 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1006 14:55:39.836593  696361 kubeadm.go:601] duration metric: took 16.98578ms to restartPrimaryControlPlane
	I1006 14:55:39.836602  696361 kubeadm.go:402] duration metric: took 53.297464ms to StartCluster
	I1006 14:55:39.836618  696361 settings.go:142] acquiring lock: {Name:mk49b10f71f24d1f54d5c453b3b04e717e9a9100 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:55:39.836679  696361 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21701-626179/kubeconfig
	I1006 14:55:39.837293  696361 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-626179/kubeconfig: {Name:mke84a74c9d22714f21826744ac414fa621492d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
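
In the kubeconfig.go lines above, minikube verified that an "ha-481559" cluster and context exist in the kubeconfig, found both missing, and rewrote the file under a write lock. A hedged sketch of that repair using client-go's clientcmd helpers (an illustration of the operation, not minikube's own code path):

    package kubeconfig

    import (
    	"k8s.io/client-go/tools/clientcmd"
    	clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
    )

    // addCluster inserts cluster and context entries for name, then writes
    // the kubeconfig back to disk.
    func addCluster(path, name, server, caPath string) error {
    	cfg, err := clientcmd.LoadFromFile(path)
    	if err != nil {
    		return err
    	}
    	cfg.Clusters[name] = &clientcmdapi.Cluster{
    		Server:               server, // "https://192.168.49.2:8443"
    		CertificateAuthority: caPath,
    	}
    	cfg.Contexts[name] = &clientcmdapi.Context{Cluster: name, AuthInfo: name}
    	return clientcmd.WriteToFile(*cfg, path)
    }
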
	I1006 14:55:39.837551  696361 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1006 14:55:39.837640  696361 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1006 14:55:39.837721  696361 addons.go:69] Setting storage-provisioner=true in profile "ha-481559"
	I1006 14:55:39.837737  696361 addons.go:238] Setting addon storage-provisioner=true in "ha-481559"
	I1006 14:55:39.837742  696361 config.go:182] Loaded profile config "ha-481559": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 14:55:39.837756  696361 addons.go:69] Setting default-storageclass=true in profile "ha-481559"
	I1006 14:55:39.837792  696361 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-481559"
	I1006 14:55:39.837774  696361 host.go:66] Checking if "ha-481559" exists ...
	I1006 14:55:39.838098  696361 cli_runner.go:164] Run: docker container inspect ha-481559 --format={{.State.Status}}
	I1006 14:55:39.838222  696361 cli_runner.go:164] Run: docker container inspect ha-481559 --format={{.State.Status}}
	I1006 14:55:39.840891  696361 out.go:179] * Verifying Kubernetes components...
	I1006 14:55:39.841917  696361 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 14:55:39.856657  696361 kapi.go:59] client config for ha-481559: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/client.crt", KeyFile:"/home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/client.key", CAFile:"/home/jenkins/minikube-integration/21701-626179/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1006 14:55:39.857025  696361 addons.go:238] Setting addon default-storageclass=true in "ha-481559"
	I1006 14:55:39.857071  696361 host.go:66] Checking if "ha-481559" exists ...
	I1006 14:55:39.857581  696361 cli_runner.go:164] Run: docker container inspect ha-481559 --format={{.State.Status}}
	I1006 14:55:39.858971  696361 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1006 14:55:39.860226  696361 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 14:55:39.860245  696361 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1006 14:55:39.860299  696361 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:55:39.882582  696361 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1006 14:55:39.882610  696361 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1006 14:55:39.882675  696361 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 14:55:39.884044  696361 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa Username:docker}
	I1006 14:55:39.900943  696361 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa Username:docker}
	I1006 14:55:39.945526  696361 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1006 14:55:39.958317  696361 node_ready.go:35] waiting up to 6m0s for node "ha-481559" to be "Ready" ...
	I1006 14:55:39.992150  696361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 14:55:40.007286  696361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	W1006 14:55:40.047812  696361 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:55:40.047859  696361 retry.go:31] will retry after 364.057024ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:55:40.064751  696361 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:55:40.064789  696361 retry.go:31] will retry after 327.571737ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:55:40.393452  696361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1006 14:55:40.413056  696361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1006 14:55:40.448723  696361 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:55:40.448759  696361 retry.go:31] will retry after 403.141628ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:55:40.468798  696361 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:55:40.468834  696361 retry.go:31] will retry after 276.4293ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:55:40.746367  696361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1006 14:55:40.802524  696361 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:55:40.802559  696361 retry.go:31] will retry after 311.376172ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:55:40.852754  696361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1006 14:55:40.906981  696361 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:55:40.907021  696361 retry.go:31] will retry after 474.24301ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:55:41.114995  696361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1006 14:55:41.170001  696361 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:55:41.170049  696361 retry.go:31] will retry after 897.092965ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
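
Each failed apply above is retried after a randomized delay that grows roughly geometrically (364ms, 327ms, 403ms, ... reaching ~39s further down the log), the classic exponential-backoff-with-jitter shape. A generic sketch of that loop (an illustration of the pattern only; minikube's retry.go has its own implementation):

    package retry

    import (
    	"math/rand"
    	"time"
    )

    // withBackoff retries apply with a jittered, roughly doubling delay,
    // matching the growth of the retry intervals seen in the log.
    func withBackoff(apply func() error, attempts int) error {
    	delay := 300 * time.Millisecond
    	var err error
    	for i := 0; i < attempts; i++ {
    		if err = apply(); err == nil {
    			return nil
    		}
    		time.Sleep(delay + time.Duration(rand.Int63n(int64(delay))))
    		delay *= 2
    	}
    	return err
    }
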
	I1006 14:55:41.382425  696361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1006 14:55:41.437366  696361 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:55:41.437397  696361 retry.go:31] will retry after 965.167019ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:55:41.958939  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	I1006 14:55:42.068134  696361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1006 14:55:42.122887  696361 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:55:42.122925  696361 retry.go:31] will retry after 947.959168ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:55:42.403332  696361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1006 14:55:42.457238  696361 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:55:42.457275  696361 retry.go:31] will retry after 1.650071235s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:55:43.071967  696361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1006 14:55:43.125956  696361 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:55:43.125994  696361 retry.go:31] will retry after 2.176788338s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:55:44.108266  696361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1006 14:55:44.161384  696361 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:55:44.161417  696361 retry.go:31] will retry after 2.544730451s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:55:44.459252  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	I1006 14:55:45.304030  696361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1006 14:55:45.359630  696361 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:55:45.359670  696361 retry.go:31] will retry after 2.25019711s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:55:46.459340  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	I1006 14:55:46.706682  696361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1006 14:55:46.759581  696361 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:55:46.759615  696361 retry.go:31] will retry after 2.522056071s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:55:47.610733  696361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1006 14:55:47.664269  696361 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:55:47.664306  696361 retry.go:31] will retry after 4.640766085s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:55:48.959157  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	I1006 14:55:49.282628  696361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1006 14:55:49.336384  696361 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:55:49.336418  696361 retry.go:31] will retry after 5.673676228s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:55:51.459087  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	I1006 14:55:52.305382  696361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1006 14:55:52.359321  696361 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:55:52.359361  696361 retry.go:31] will retry after 9.481577286s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:55:53.959083  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	I1006 14:55:55.010721  696361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1006 14:55:55.065131  696361 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:55:55.065171  696361 retry.go:31] will retry after 3.836963062s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:55:56.459045  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	I1006 14:55:58.902488  696361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1006 14:55:58.955901  696361 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:55:58.955935  696361 retry.go:31] will retry after 5.927536984s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:55:58.959353  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:56:01.459047  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	I1006 14:56:01.841474  696361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1006 14:56:01.898521  696361 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:56:01.898557  696361 retry.go:31] will retry after 4.904827501s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:56:03.958922  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	I1006 14:56:04.884501  696361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1006 14:56:04.939279  696361 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:56:04.939317  696361 retry.go:31] will retry after 7.40875545s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:56:05.959924  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	I1006 14:56:06.804327  696361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1006 14:56:06.857900  696361 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:56:06.857929  696361 retry.go:31] will retry after 19.104468711s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:56:08.458883  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:56:10.459161  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	I1006 14:56:12.348374  696361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1006 14:56:12.402365  696361 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:56:12.402403  696361 retry.go:31] will retry after 18.378132313s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:56:12.959096  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:56:15.458930  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:56:17.459924  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:56:19.959250  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:56:22.459052  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:56:24.958967  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	I1006 14:56:25.962990  696361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1006 14:56:26.017478  696361 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:56:26.017517  696361 retry.go:31] will retry after 29.077614598s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:56:26.959419  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:56:28.959649  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	I1006 14:56:30.781291  696361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1006 14:56:30.836228  696361 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:56:30.836263  696361 retry.go:31] will retry after 39.344728119s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:56:30.959929  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:56:33.459024  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:56:35.959931  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:56:38.459088  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:56:40.959025  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:56:43.459871  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:56:45.959871  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:56:48.460091  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:56:50.959024  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:56:52.959070  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	I1006 14:56:55.096159  696361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1006 14:56:55.150330  696361 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:56:55.150378  696361 retry.go:31] will retry after 28.420260342s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:56:55.459257  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:56:57.959372  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:56:59.959551  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:57:01.959698  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:57:04.459881  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:57:06.959832  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:57:08.959887  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	I1006 14:57:10.181504  696361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1006 14:57:10.237344  696361 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:57:10.237488  696361 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
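
	[editor's note] On the stderr above: kubectl validates manifests against the server's OpenAPI schema, so even the validation phase needs a live apiserver. The escape hatch the error text itself suggests would be:

    sudo KUBECONFIG=/var/lib/minikube/kubeconfig kubectl apply --validate=false --force -f /etc/kubernetes/addons/storageclass.yaml

	but with the apiserver refusing connections on 8443, skipping validation would only move the failure from the OpenAPI download to the actual submit.
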
	W1006 14:57:11.459330  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:57:13.959081  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:57:15.959422  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:57:18.459013  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:57:20.459598  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:57:22.959317  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	I1006 14:57:23.571775  696361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1006 14:57:23.627058  696361 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:57:23.627201  696361 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1006 14:57:23.629381  696361 out.go:179] * Enabled addons: 
	I1006 14:57:23.630438  696361 addons.go:514] duration metric: took 1m43.792792491s for enable addons: enabled=[]
	W1006 14:57:25.459171  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:57:27.959455  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:57:30.459027  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:57:32.459536  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:57:34.959010  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:57:36.959307  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:57:39.458947  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:57:41.459099  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:57:43.459520  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:57:45.958920  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:57:47.959333  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:57:49.959783  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:57:52.459003  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:57:54.459607  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:57:56.958916  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:57:58.959167  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:58:00.959819  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:58:03.459304  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:58:05.459888  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:58:07.959107  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:58:09.959766  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:58:12.459369  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:58:14.959038  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:58:17.459044  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:58:19.459387  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:58:21.958996  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:58:23.959805  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:58:26.459119  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:58:28.958951  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:58:30.959277  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:58:33.458997  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:58:35.459658  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:58:37.959015  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:58:39.959243  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:58:42.458930  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:58:44.959887  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:58:47.459910  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:58:49.959141  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:58:51.959846  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:58:54.459248  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:58:56.459883  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:58:58.959838  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:59:01.459249  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:59:03.459705  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:59:05.959114  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:59:07.959463  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:59:10.459041  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:59:12.459597  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:59:14.958979  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:59:16.959051  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:59:19.458975  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:59:21.459642  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:59:23.959276  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:59:26.458942  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:59:28.459065  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:59:30.459520  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:59:32.959342  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:59:35.458938  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:59:37.459167  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:59:39.459850  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:59:41.958991  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:59:43.959098  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:59:45.959838  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:59:48.459752  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:59:50.959541  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:59:53.459625  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:59:55.959176  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:59:57.959808  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 14:59:59.959875  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:00:02.459262  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:00:04.958980  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:00:06.959159  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:00:08.959328  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:00:11.459054  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:00:13.459614  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:00:15.959129  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:00:18.459016  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:00:20.459698  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:00:22.959471  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:00:25.459149  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:00:27.459732  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:00:29.959613  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:00:32.459295  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:00:34.959099  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:00:36.959423  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:00:39.458943  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:00:41.459732  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:00:43.959134  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:00:45.959809  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:00:48.458963  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:00:50.459737  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:00:52.959576  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:00:55.459157  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:00:57.959320  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:01:00.459128  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:01:02.459620  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:01:04.959253  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:01:07.459044  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:01:09.459679  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:01:11.959175  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:01:13.959922  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:01:16.459709  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:01:18.959266  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:01:20.959740  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:01:23.459313  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:01:25.959023  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:01:27.959694  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:01:30.459527  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:01:32.959353  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:01:35.459053  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:01:37.459295  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:01:39.459459  696361 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	I1006 15:01:39.959410  696361 node_ready.go:38] duration metric: took 6m0.001052975s for node "ha-481559" to be "Ready" ...
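
	[editor's note] The condition node_ready.go polled for during those six minutes is the Ready entry in the node's status conditions. A minimal client-go sketch of the same check (assuming the k8s.io/client-go module is available; the kubeconfig path and node name are taken from the logs above):

    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // nodeReady fetches the node and reports whether its Ready condition is True.
    func nodeReady(ctx context.Context, cs kubernetes.Interface, name string) (bool, error) {
    	node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
    	if err != nil {
    		return false, err // e.g. "connect: connection refused" while the apiserver is down
    	}
    	for _, c := range node.Status.Conditions {
    		if c.Type == corev1.NodeReady {
    			return c.Status == corev1.ConditionTrue, nil
    		}
    	}
    	return false, nil
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	ready, err := nodeReady(context.Background(), cs, "ha-481559")
    	fmt.Println("ready:", ready, "err:", err)
    }
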
	I1006 15:01:39.961897  696361 out.go:203] 
	W1006 15:01:39.963068  696361 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1006 15:01:39.963087  696361 out.go:285] * 
	W1006 15:01:39.964873  696361 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1006 15:01:39.966045  696361 out.go:203] 
	
	
	==> CRI-O <==
	Oct 06 15:01:36 ha-481559 crio[517]: time="2025-10-06T15:01:36.082338227Z" level=info msg="createCtr: removing container 4b0059d47974523db945260c7699bd64dcbac10416cbee33980dd26f72a0e19c" id=d64e5b5a-11a9-4195-8099-7043018fdf73 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 15:01:36 ha-481559 crio[517]: time="2025-10-06T15:01:36.08238042Z" level=info msg="createCtr: deleting container 4b0059d47974523db945260c7699bd64dcbac10416cbee33980dd26f72a0e19c from storage" id=d64e5b5a-11a9-4195-8099-7043018fdf73 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 15:01:36 ha-481559 crio[517]: time="2025-10-06T15:01:36.084461881Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-ha-481559_kube-system_520c6060936b1c2aac479c99ed6c0355_0" id=d64e5b5a-11a9-4195-8099-7043018fdf73 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 15:01:37 ha-481559 crio[517]: time="2025-10-06T15:01:37.055937244Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=3c466103-7198-402a-bd3d-953674f2632d name=/runtime.v1.ImageService/ImageStatus
	Oct 06 15:01:37 ha-481559 crio[517]: time="2025-10-06T15:01:37.056852651Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=f0d9a52b-3192-43f3-bf76-09661ec8881f name=/runtime.v1.ImageService/ImageStatus
	Oct 06 15:01:37 ha-481559 crio[517]: time="2025-10-06T15:01:37.057807142Z" level=info msg="Creating container: kube-system/kube-apiserver-ha-481559/kube-apiserver" id=668fb572-f5be-4b5c-abb2-7b17c5471825 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 15:01:37 ha-481559 crio[517]: time="2025-10-06T15:01:37.058066903Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 15:01:37 ha-481559 crio[517]: time="2025-10-06T15:01:37.06256213Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 15:01:37 ha-481559 crio[517]: time="2025-10-06T15:01:37.062985363Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 15:01:37 ha-481559 crio[517]: time="2025-10-06T15:01:37.075143902Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=668fb572-f5be-4b5c-abb2-7b17c5471825 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 15:01:37 ha-481559 crio[517]: time="2025-10-06T15:01:37.076547924Z" level=info msg="createCtr: deleting container ID e0ac293b8abce3efed59ab591b6602d699b7d3400daa4a8fb2e55c69386948c1 from idIndex" id=668fb572-f5be-4b5c-abb2-7b17c5471825 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 15:01:37 ha-481559 crio[517]: time="2025-10-06T15:01:37.076587388Z" level=info msg="createCtr: removing container e0ac293b8abce3efed59ab591b6602d699b7d3400daa4a8fb2e55c69386948c1" id=668fb572-f5be-4b5c-abb2-7b17c5471825 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 15:01:37 ha-481559 crio[517]: time="2025-10-06T15:01:37.076627777Z" level=info msg="createCtr: deleting container e0ac293b8abce3efed59ab591b6602d699b7d3400daa4a8fb2e55c69386948c1 from storage" id=668fb572-f5be-4b5c-abb2-7b17c5471825 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 15:01:37 ha-481559 crio[517]: time="2025-10-06T15:01:37.078851699Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-ha-481559_kube-system_b4e1cca8a09d3789a7e0862428dfe0db_0" id=668fb572-f5be-4b5c-abb2-7b17c5471825 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 15:01:39 ha-481559 crio[517]: time="2025-10-06T15:01:39.055701382Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=f4a276b2-0e7f-4319-bcd6-09688ac1a5f7 name=/runtime.v1.ImageService/ImageStatus
	Oct 06 15:01:39 ha-481559 crio[517]: time="2025-10-06T15:01:39.056472784Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=4079efbd-8599-47d5-9e39-74a586381eb6 name=/runtime.v1.ImageService/ImageStatus
	Oct 06 15:01:39 ha-481559 crio[517]: time="2025-10-06T15:01:39.057301747Z" level=info msg="Creating container: kube-system/kube-scheduler-ha-481559/kube-scheduler" id=70ab66b7-406b-42f3-b058-05db47a1dcaa name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 15:01:39 ha-481559 crio[517]: time="2025-10-06T15:01:39.057531285Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 15:01:39 ha-481559 crio[517]: time="2025-10-06T15:01:39.060907945Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 15:01:39 ha-481559 crio[517]: time="2025-10-06T15:01:39.061374158Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 15:01:39 ha-481559 crio[517]: time="2025-10-06T15:01:39.076972344Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=70ab66b7-406b-42f3-b058-05db47a1dcaa name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 15:01:39 ha-481559 crio[517]: time="2025-10-06T15:01:39.078291327Z" level=info msg="createCtr: deleting container ID b963866bf433e8809f16886d1bda881f3ca7fa54ed1407283fefec553169d089 from idIndex" id=70ab66b7-406b-42f3-b058-05db47a1dcaa name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 15:01:39 ha-481559 crio[517]: time="2025-10-06T15:01:39.078328192Z" level=info msg="createCtr: removing container b963866bf433e8809f16886d1bda881f3ca7fa54ed1407283fefec553169d089" id=70ab66b7-406b-42f3-b058-05db47a1dcaa name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 15:01:39 ha-481559 crio[517]: time="2025-10-06T15:01:39.078383536Z" level=info msg="createCtr: deleting container b963866bf433e8809f16886d1bda881f3ca7fa54ed1407283fefec553169d089 from storage" id=70ab66b7-406b-42f3-b058-05db47a1dcaa name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 15:01:39 ha-481559 crio[517]: time="2025-10-06T15:01:39.080448731Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-ha-481559_kube-system_cc93cb8d89afaa943672c70952b45174_0" id=70ab66b7-406b-42f3-b058-05db47a1dcaa name=/runtime.v1.RuntimeService/CreateContainer
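
	[editor's note] The recurring "Container creation error: cannot open sd-bus: No such file or directory" is the root cause behind every connection-refused line above: etcd, kube-apiserver and kube-scheduler all fail at container-create time, so nothing ever listens on 8443. The sd-bus error generally means the OCI runtime was asked to place containers in systemd-managed cgroups but cannot reach a systemd D-Bus socket, which can happen on nested, containerized nodes like this Docker-driver one. A commonly suggested mitigation (an assumption here, not something this report verifies) is to run CRI-O with the cgroupfs manager:

    # /etc/crio/crio.conf -- illustrative snippet, not taken from this node
    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"   # required value when cgroup_manager is "cgroupfs"
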
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 15:01:44.331962    2369 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 15:01:44.332588    2369 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 15:01:44.334160    2369 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 15:01:44.334637    2369 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 15:01:44.336168    2369 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
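
	[editor's note] A quick way to separate "apiserver down" from "client misconfigured" is to probe the port directly. A minimal sketch (a hypothetical helper, not part of the test suite); TLS verification is skipped because only reachability matters here:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	// The endpoint the logs keep dialing; /readyz answers without kubectl,
    	// though an unauthenticated call may return 401/403 rather than 200.
    	resp, err := client.Get("https://192.168.49.2:8443/readyz")
    	if err != nil {
    		fmt.Println("apiserver unreachable:", err) // the state this report is stuck in
    		return
    	}
    	defer resp.Body.Close()
    	fmt.Println("apiserver answered:", resp.Status)
    }
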
	
	
	==> dmesg <==
	
	
	==> kernel <==
	 15:01:44 up  5:43,  0 user,  load average: 0.09, 0.11, 0.15
	Linux ha-481559 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 06 15:01:36 ha-481559 kubelet[669]: E1006 15:01:36.084885     669 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 06 15:01:36 ha-481559 kubelet[669]:         container etcd start failed in pod etcd-ha-481559_kube-system(520c6060936b1c2aac479c99ed6c0355): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 06 15:01:36 ha-481559 kubelet[669]:  > logger="UnhandledError"
	Oct 06 15:01:36 ha-481559 kubelet[669]: E1006 15:01:36.084930     669 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-ha-481559" podUID="520c6060936b1c2aac479c99ed6c0355"
	Oct 06 15:01:37 ha-481559 kubelet[669]: E1006 15:01:37.055476     669 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-481559\" not found" node="ha-481559"
	Oct 06 15:01:37 ha-481559 kubelet[669]: E1006 15:01:37.079140     669 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 06 15:01:37 ha-481559 kubelet[669]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 06 15:01:37 ha-481559 kubelet[669]:  > podSandboxID="68f64753e0c9bc4241bd77357a937ef42a17b59c9ee0eba280403f1335b5cc1e"
	Oct 06 15:01:37 ha-481559 kubelet[669]: E1006 15:01:37.079287     669 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 06 15:01:37 ha-481559 kubelet[669]:         container kube-apiserver start failed in pod kube-apiserver-ha-481559_kube-system(b4e1cca8a09d3789a7e0862428dfe0db): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 06 15:01:37 ha-481559 kubelet[669]:  > logger="UnhandledError"
	Oct 06 15:01:37 ha-481559 kubelet[669]: E1006 15:01:37.079323     669 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-ha-481559" podUID="b4e1cca8a09d3789a7e0862428dfe0db"
	Oct 06 15:01:39 ha-481559 kubelet[669]: E1006 15:01:39.055306     669 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-481559\" not found" node="ha-481559"
	Oct 06 15:01:39 ha-481559 kubelet[669]: E1006 15:01:39.070227     669 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-481559\" not found"
	Oct 06 15:01:39 ha-481559 kubelet[669]: E1006 15:01:39.080716     669 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 06 15:01:39 ha-481559 kubelet[669]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 06 15:01:39 ha-481559 kubelet[669]:  > podSandboxID="3b7ddccf443b7c3df7fe7a1aafb38b39a777788c5f30f29647a93377ee88f8e0"
	Oct 06 15:01:39 ha-481559 kubelet[669]: E1006 15:01:39.080803     669 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 06 15:01:39 ha-481559 kubelet[669]:         container kube-scheduler start failed in pod kube-scheduler-ha-481559_kube-system(cc93cb8d89afaa943672c70952b45174): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 06 15:01:39 ha-481559 kubelet[669]:  > logger="UnhandledError"
	Oct 06 15:01:39 ha-481559 kubelet[669]: E1006 15:01:39.080836     669 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-ha-481559" podUID="cc93cb8d89afaa943672c70952b45174"
	Oct 06 15:01:41 ha-481559 kubelet[669]: E1006 15:01:41.692484     669 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-481559?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 06 15:01:41 ha-481559 kubelet[669]: I1006 15:01:41.870779     669 kubelet_node_status.go:75] "Attempting to register node" node="ha-481559"
	Oct 06 15:01:41 ha-481559 kubelet[669]: E1006 15:01:41.871191     669 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-481559"
	Oct 06 15:01:42 ha-481559 kubelet[669]: E1006 15:01:42.633135     669 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8443: connect: connection refused" event="&Event{ObjectMeta:{ha-481559.186beeb4a4cbaf58  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-481559,UID:ha-481559,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ha-481559 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ha-481559,},FirstTimestamp:2025-10-06 14:55:39.044646744 +0000 UTC m=+0.073577294,LastTimestamp:2025-10-06 14:55:39.044646744 +0000 UTC m=+0.073577294,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-481559,}"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-481559 -n ha-481559
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-481559 -n ha-481559: exit status 2 (295.455213ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "ha-481559" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (1.54s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (1.36s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-481559 stop --alsologtostderr -v 5
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-481559 stop --alsologtostderr -v 5: (1.206542293s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-481559 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-481559 status --alsologtostderr -v 5: exit status 7 (66.790821ms)

                                                
                                                
-- stdout --
	ha-481559
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1006 15:01:45.963105  701927 out.go:360] Setting OutFile to fd 1 ...
	I1006 15:01:45.963393  701927 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 15:01:45.963405  701927 out.go:374] Setting ErrFile to fd 2...
	I1006 15:01:45.963409  701927 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 15:01:45.963624  701927 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-626179/.minikube/bin
	I1006 15:01:45.963798  701927 out.go:368] Setting JSON to false
	I1006 15:01:45.963829  701927 mustload.go:65] Loading cluster: ha-481559
	I1006 15:01:45.963885  701927 notify.go:220] Checking for updates...
	I1006 15:01:45.964146  701927 config.go:182] Loaded profile config "ha-481559": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 15:01:45.964160  701927 status.go:174] checking status of ha-481559 ...
	I1006 15:01:45.964601  701927 cli_runner.go:164] Run: docker container inspect ha-481559 --format={{.State.Status}}
	I1006 15:01:45.981911  701927 status.go:371] ha-481559 host status = "Stopped" (err=<nil>)
	I1006 15:01:45.981947  701927 status.go:384] host is not running, skipping remaining checks
	I1006 15:01:45.981957  701927 status.go:176] ha-481559 status: &{Name:ha-481559 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
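
	[editor's note] The "Stopped" printed by `status --format={{.APIServer}}` earlier and the struct dumped at status.go:176 above are two views of the same thing: a Go text/template applied to a status struct. A minimal sketch, with a hypothetical mirror of the fields visible in that dump:

    package main

    import (
    	"os"
    	"text/template"
    )

    // Status is a hypothetical mirror of the fields in the
    // "&{Name:ha-481559 Host:Stopped Kubelet:Stopped ...}" dump above.
    type Status struct {
    	Name, Host, Kubelet, APIServer, Kubeconfig string
    }

    func main() {
    	st := Status{
    		Name: "ha-481559", Host: "Stopped", Kubelet: "Stopped",
    		APIServer: "Stopped", Kubeconfig: "Stopped",
    	}
    	tmpl := template.Must(template.New("status").Parse("{{.APIServer}}\n"))
    	_ = tmpl.Execute(os.Stdout, st) // prints "Stopped", matching the -- stdout -- earlier
    }
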
ha_test.go:545: status says not two control-plane nodes are present: args "out/minikube-linux-amd64 -p ha-481559 status --alsologtostderr -v 5": ha-481559
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha_test.go:551: status says not three kubelets are stopped: args "out/minikube-linux-amd64 -p ha-481559 status --alsologtostderr -v 5": ha-481559
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:554: status says not two apiservers are stopped: args "out/minikube-linux-amd64 -p ha-481559 status --alsologtostderr -v 5": ha-481559
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped
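All three assertions above amount to counting fields in the status text, and with only the primary node left in the profile each count comes back as one. A rough equivalent of the checks, assuming the same binary and profile:

	out/minikube-linux-amd64 -p ha-481559 status | grep -c 'type: Control Plane'   # test expects 2, this run has 1
	out/minikube-linux-amd64 -p ha-481559 status | grep -c 'kubelet: Stopped'      # test expects 3, this run has 1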

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-481559
helpers_test.go:243: (dbg) docker inspect ha-481559:

-- stdout --
	[
	    {
	        "Id": "8b017d29b6b188c11217460aa328e959f3cceef4aaac68c0efbe9c3f356f27b0",
	        "Created": "2025-10-06T14:44:39.623616791Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "exited",
	            "Running": false,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 0,
	            "ExitCode": 130,
	            "Error": "",
	            "StartedAt": "2025-10-06T14:55:32.848872757Z",
	            "FinishedAt": "2025-10-06T15:01:45.038433314Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/8b017d29b6b188c11217460aa328e959f3cceef4aaac68c0efbe9c3f356f27b0/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8b017d29b6b188c11217460aa328e959f3cceef4aaac68c0efbe9c3f356f27b0/hostname",
	        "HostsPath": "/var/lib/docker/containers/8b017d29b6b188c11217460aa328e959f3cceef4aaac68c0efbe9c3f356f27b0/hosts",
	        "LogPath": "/var/lib/docker/containers/8b017d29b6b188c11217460aa328e959f3cceef4aaac68c0efbe9c3f356f27b0/8b017d29b6b188c11217460aa328e959f3cceef4aaac68c0efbe9c3f356f27b0-json.log",
	        "Name": "/ha-481559",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-481559:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-481559",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "8b017d29b6b188c11217460aa328e959f3cceef4aaac68c0efbe9c3f356f27b0",
	                "LowerDir": "/var/lib/docker/overlay2/764c4d13032cea1c981a1513641378a4e309e5bc01b5356c6fecba1c3e0e2311-init/diff:/var/lib/docker/overlay2/498c39ad2e273bbda04a4b230222b9767ea2da097b1fe98436168d26143cd080/diff",
	                "MergedDir": "/var/lib/docker/overlay2/764c4d13032cea1c981a1513641378a4e309e5bc01b5356c6fecba1c3e0e2311/merged",
	                "UpperDir": "/var/lib/docker/overlay2/764c4d13032cea1c981a1513641378a4e309e5bc01b5356c6fecba1c3e0e2311/diff",
	                "WorkDir": "/var/lib/docker/overlay2/764c4d13032cea1c981a1513641378a4e309e5bc01b5356c6fecba1c3e0e2311/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-481559",
	                "Source": "/var/lib/docker/volumes/ha-481559/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-481559",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-481559",
	                "name.minikube.sigs.k8s.io": "ha-481559",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "",
	            "SandboxKey": "",
	            "Ports": {},
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-481559": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "be549c6a1ae4457d4629d9a7f86cde88021333ee0af8bb7a740b008115c43dde",
	                    "EndpointID": "",
	                    "Gateway": "",
	                    "IPAddress": "",
	                    "IPPrefixLen": 0,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-481559",
	                        "8b017d29b6b1"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
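Most of the inspect payload above is static container configuration; the post-mortem-relevant part is the State object, which records a clean stop (ExitCode 130, i.e. 128 plus signal 2 by the usual convention, with FinishedAt matching the stop time). A sketch for pulling out just those fields, assuming jq is available on the host:

	docker inspect ha-481559 | jq '.[0].State | {Status, ExitCode, StartedAt, FinishedAt}'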
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-481559 -n ha-481559
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-481559 -n ha-481559: exit status 7 (68.159433ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:247: status error: exit status 7 (may be ok)
helpers_test.go:249: "ha-481559" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopCluster (1.36s)

TestMultiControlPlane/serial/RestartCluster (368.56s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-481559 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1006 15:01:53.593318  629719 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 15:05:30.512717  629719 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-481559 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: exit status 80 (6m7.248112359s)

-- stdout --
	* [ha-481559] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21701
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21701-626179/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21701-626179/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "ha-481559" primary control-plane node in "ha-481559" cluster
	* Pulling base image v0.0.48-1759382731-21643 ...
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: 
	
	

-- /stdout --
** stderr ** 
	I1006 15:01:46.116187  701984 out.go:360] Setting OutFile to fd 1 ...
	I1006 15:01:46.116327  701984 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 15:01:46.116336  701984 out.go:374] Setting ErrFile to fd 2...
	I1006 15:01:46.116340  701984 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 15:01:46.116564  701984 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-626179/.minikube/bin
	I1006 15:01:46.116989  701984 out.go:368] Setting JSON to false
	I1006 15:01:46.117973  701984 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":20642,"bootTime":1759742264,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1006 15:01:46.118071  701984 start.go:140] virtualization: kvm guest
	I1006 15:01:46.119930  701984 out.go:179] * [ha-481559] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1006 15:01:46.121071  701984 out.go:179]   - MINIKUBE_LOCATION=21701
	I1006 15:01:46.121071  701984 notify.go:220] Checking for updates...
	I1006 15:01:46.123063  701984 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1006 15:01:46.124433  701984 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21701-626179/kubeconfig
	I1006 15:01:46.125406  701984 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21701-626179/.minikube
	I1006 15:01:46.126304  701984 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1006 15:01:46.127330  701984 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1006 15:01:46.128989  701984 config.go:182] Loaded profile config "ha-481559": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 15:01:46.129680  701984 driver.go:421] Setting default libvirt URI to qemu:///system
	I1006 15:01:46.153833  701984 docker.go:123] docker version: linux-28.5.0:Docker Engine - Community
	I1006 15:01:46.153923  701984 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 15:01:46.210040  701984 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-06 15:01:46.200236285 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1006 15:01:46.210147  701984 docker.go:318] overlay module found
	I1006 15:01:46.211692  701984 out.go:179] * Using the docker driver based on existing profile
	I1006 15:01:46.212596  701984 start.go:304] selected driver: docker
	I1006 15:01:46.212612  701984 start.go:924] validating driver "docker" against &{Name:ha-481559 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-481559 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 15:01:46.212693  701984 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1006 15:01:46.212776  701984 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 15:01:46.269605  701984 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-06 15:01:46.258876471 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1006 15:01:46.270302  701984 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1006 15:01:46.270329  701984 cni.go:84] Creating CNI manager for ""
	I1006 15:01:46.270373  701984 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1006 15:01:46.270419  701984 start.go:348] cluster config:
	{Name:ha-481559 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-481559 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 15:01:46.272125  701984 out.go:179] * Starting "ha-481559" primary control-plane node in "ha-481559" cluster
	I1006 15:01:46.273048  701984 cache.go:123] Beginning downloading kic base image for docker with crio
	I1006 15:01:46.274095  701984 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1006 15:01:46.274969  701984 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1006 15:01:46.275001  701984 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21701-626179/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1006 15:01:46.275010  701984 cache.go:58] Caching tarball of preloaded images
	I1006 15:01:46.275079  701984 preload.go:233] Found /home/jenkins/minikube-integration/21701-626179/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1006 15:01:46.275089  701984 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1006 15:01:46.275081  701984 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1006 15:01:46.275176  701984 profile.go:143] Saving config to /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/config.json ...
	I1006 15:01:46.295225  701984 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1006 15:01:46.295246  701984 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1006 15:01:46.295266  701984 cache.go:232] Successfully downloaded all kic artifacts
	I1006 15:01:46.295293  701984 start.go:360] acquireMachinesLock for ha-481559: {Name:mk240cd185ab39e9e4d3fa7c476aea5736cb5b11 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1006 15:01:46.295349  701984 start.go:364] duration metric: took 37.555µs to acquireMachinesLock for "ha-481559"
	I1006 15:01:46.295367  701984 start.go:96] Skipping create...Using existing machine configuration
	I1006 15:01:46.295375  701984 fix.go:54] fixHost starting: 
	I1006 15:01:46.295587  701984 cli_runner.go:164] Run: docker container inspect ha-481559 --format={{.State.Status}}
	I1006 15:01:46.312275  701984 fix.go:112] recreateIfNeeded on ha-481559: state=Stopped err=<nil>
	W1006 15:01:46.312302  701984 fix.go:138] unexpected machine state, will restart: <nil>
	I1006 15:01:46.314002  701984 out.go:252] * Restarting existing docker container for "ha-481559" ...
	I1006 15:01:46.314062  701984 cli_runner.go:164] Run: docker start ha-481559
	I1006 15:01:46.546450  701984 cli_runner.go:164] Run: docker container inspect ha-481559 --format={{.State.Status}}
	I1006 15:01:46.564424  701984 kic.go:430] container "ha-481559" state is running.
	I1006 15:01:46.564772  701984 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-481559
	I1006 15:01:46.582786  701984 profile.go:143] Saving config to /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/config.json ...
	I1006 15:01:46.582997  701984 machine.go:93] provisionDockerMachine start ...
	I1006 15:01:46.583078  701984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 15:01:46.601452  701984 main.go:141] libmachine: Using SSH client type: native
	I1006 15:01:46.601724  701984 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32893 <nil> <nil>}
	I1006 15:01:46.601739  701984 main.go:141] libmachine: About to run SSH command:
	hostname
	I1006 15:01:46.602337  701984 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:35090->127.0.0.1:32893: read: connection reset by peer
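	# The dial error above is transient: the container was started moments earlier and sshd is not yet
	# accepting connections, so libmachine keeps retrying until the handshake succeeds (three seconds
	# later, next line). A hypothetical wait loop with the same effect, using the port, user, and key
	# path that appear elsewhere in this log:
	#   until ssh -p 32893 -o ConnectTimeout=2 -o StrictHostKeyChecking=no \
	#       -i /home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa \
	#       docker@127.0.0.1 true 2>/dev/null; do sleep 1; done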
	I1006 15:01:49.745932  701984 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-481559
	
	I1006 15:01:49.745960  701984 ubuntu.go:182] provisioning hostname "ha-481559"
	I1006 15:01:49.746042  701984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 15:01:49.763495  701984 main.go:141] libmachine: Using SSH client type: native
	I1006 15:01:49.763769  701984 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32893 <nil> <nil>}
	I1006 15:01:49.763784  701984 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-481559 && echo "ha-481559" | sudo tee /etc/hostname
	I1006 15:01:49.916644  701984 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-481559
	
	I1006 15:01:49.916725  701984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 15:01:49.934847  701984 main.go:141] libmachine: Using SSH client type: native
	I1006 15:01:49.935071  701984 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32893 <nil> <nil>}
	I1006 15:01:49.935089  701984 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-481559' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-481559/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-481559' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1006 15:01:50.079011  701984 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1006 15:01:50.079055  701984 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21701-626179/.minikube CaCertPath:/home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21701-626179/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21701-626179/.minikube}
	I1006 15:01:50.079077  701984 ubuntu.go:190] setting up certificates
	I1006 15:01:50.079088  701984 provision.go:84] configureAuth start
	I1006 15:01:50.079141  701984 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-481559
	I1006 15:01:50.096776  701984 provision.go:143] copyHostCerts
	I1006 15:01:50.096843  701984 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21701-626179/.minikube/key.pem
	I1006 15:01:50.096887  701984 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-626179/.minikube/key.pem, removing ...
	I1006 15:01:50.096924  701984 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-626179/.minikube/key.pem
	I1006 15:01:50.097001  701984 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21701-626179/.minikube/key.pem (1679 bytes)
	I1006 15:01:50.097123  701984 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21701-626179/.minikube/ca.pem
	I1006 15:01:50.097151  701984 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-626179/.minikube/ca.pem, removing ...
	I1006 15:01:50.097159  701984 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-626179/.minikube/ca.pem
	I1006 15:01:50.097230  701984 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21701-626179/.minikube/ca.pem (1082 bytes)
	I1006 15:01:50.097381  701984 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21701-626179/.minikube/cert.pem
	I1006 15:01:50.097413  701984 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-626179/.minikube/cert.pem, removing ...
	I1006 15:01:50.097420  701984 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-626179/.minikube/cert.pem
	I1006 15:01:50.097468  701984 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21701-626179/.minikube/cert.pem (1123 bytes)
	I1006 15:01:50.097549  701984 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21701-626179/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca-key.pem org=jenkins.ha-481559 san=[127.0.0.1 192.168.49.2 ha-481559 localhost minikube]
	I1006 15:01:50.447800  701984 provision.go:177] copyRemoteCerts
	I1006 15:01:50.447874  701984 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1006 15:01:50.447927  701984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 15:01:50.465959  701984 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32893 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa Username:docker}
	I1006 15:01:50.568789  701984 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1006 15:01:50.568870  701984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1006 15:01:50.586702  701984 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1006 15:01:50.586774  701984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1006 15:01:50.604720  701984 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1006 15:01:50.604808  701984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1006 15:01:50.622688  701984 provision.go:87] duration metric: took 543.582589ms to configureAuth
	I1006 15:01:50.622726  701984 ubuntu.go:206] setting minikube options for container-runtime
	I1006 15:01:50.622909  701984 config.go:182] Loaded profile config "ha-481559": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 15:01:50.623013  701984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 15:01:50.640864  701984 main.go:141] libmachine: Using SSH client type: native
	I1006 15:01:50.641165  701984 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32893 <nil> <nil>}
	I1006 15:01:50.641193  701984 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1006 15:01:50.900815  701984 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1006 15:01:50.900843  701984 machine.go:96] duration metric: took 4.317828783s to provisionDockerMachine
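	# The tee command above wrote a one-line options file that feeds "--insecure-registry 10.96.0.0/12"
	# (the service CIDR) into the CRI-O service before restarting it. A sketch for reading it back,
	# assuming SSH access through the profile:
	#   out/minikube-linux-amd64 -p ha-481559 ssh "cat /etc/sysconfig/crio.minikube"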
	I1006 15:01:50.900853  701984 start.go:293] postStartSetup for "ha-481559" (driver="docker")
	I1006 15:01:50.900863  701984 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1006 15:01:50.900923  701984 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1006 15:01:50.900961  701984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 15:01:50.918547  701984 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32893 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa Username:docker}
	I1006 15:01:51.021081  701984 ssh_runner.go:195] Run: cat /etc/os-release
	I1006 15:01:51.024764  701984 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1006 15:01:51.024788  701984 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1006 15:01:51.024798  701984 filesync.go:126] Scanning /home/jenkins/minikube-integration/21701-626179/.minikube/addons for local assets ...
	I1006 15:01:51.024843  701984 filesync.go:126] Scanning /home/jenkins/minikube-integration/21701-626179/.minikube/files for local assets ...
	I1006 15:01:51.024912  701984 filesync.go:149] local asset: /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem -> 6297192.pem in /etc/ssl/certs
	I1006 15:01:51.024927  701984 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem -> /etc/ssl/certs/6297192.pem
	I1006 15:01:51.025019  701984 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1006 15:01:51.032826  701984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem --> /etc/ssl/certs/6297192.pem (1708 bytes)
	I1006 15:01:51.050602  701984 start.go:296] duration metric: took 149.73063ms for postStartSetup
	I1006 15:01:51.050696  701984 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1006 15:01:51.050748  701984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 15:01:51.068484  701984 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32893 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa Username:docker}
	I1006 15:01:51.167707  701984 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1006 15:01:51.172531  701984 fix.go:56] duration metric: took 4.877147401s for fixHost
	I1006 15:01:51.172561  701984 start.go:83] releasing machines lock for "ha-481559", held for 4.877200795s
	I1006 15:01:51.172636  701984 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-481559
	I1006 15:01:51.190941  701984 ssh_runner.go:195] Run: cat /version.json
	I1006 15:01:51.191006  701984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 15:01:51.191054  701984 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1006 15:01:51.191134  701984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 15:01:51.209128  701984 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32893 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa Username:docker}
	I1006 15:01:51.209584  701984 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32893 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa Username:docker}
	I1006 15:01:51.362495  701984 ssh_runner.go:195] Run: systemctl --version
	I1006 15:01:51.369363  701984 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1006 15:01:51.404999  701984 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1006 15:01:51.409958  701984 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1006 15:01:51.410028  701984 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1006 15:01:51.418138  701984 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1006 15:01:51.418168  701984 start.go:495] detecting cgroup driver to use...
	I1006 15:01:51.418201  701984 detect.go:190] detected "systemd" cgroup driver on host os
	I1006 15:01:51.418264  701984 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1006 15:01:51.432500  701984 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1006 15:01:51.444740  701984 docker.go:218] disabling cri-docker service (if available) ...
	I1006 15:01:51.444799  701984 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1006 15:01:51.459568  701984 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1006 15:01:51.472638  701984 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1006 15:01:51.548093  701984 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1006 15:01:51.629502  701984 docker.go:234] disabling docker service ...
	I1006 15:01:51.629574  701984 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1006 15:01:51.643687  701984 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1006 15:01:51.656528  701984 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1006 15:01:51.734011  701984 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1006 15:01:51.812779  701984 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1006 15:01:51.825167  701984 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1006 15:01:51.839186  701984 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1006 15:01:51.839274  701984 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 15:01:51.848529  701984 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1006 15:01:51.848608  701984 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 15:01:51.857415  701984 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 15:01:51.866115  701984 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 15:01:51.874826  701984 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1006 15:01:51.882836  701984 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 15:01:51.891797  701984 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 15:01:51.900171  701984 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 15:01:51.908782  701984 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1006 15:01:51.916072  701984 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1006 15:01:51.923289  701984 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 15:01:51.999114  701984 ssh_runner.go:195] Run: sudo systemctl restart crio
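	# Taken together, the sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf before this
	# restart. A sketch of the resulting drop-in, reconstructed from those commands (other keys in
	# the file are left as they were):
	#   pause_image = "registry.k8s.io/pause:3.10.1"
	#   cgroup_manager = "systemd"
	#   conmon_cgroup = "pod"
	#   default_sysctls = [
	#     "net.ipv4.ip_unprivileged_port_start=0",
	#   ]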
	I1006 15:01:52.103785  701984 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1006 15:01:52.103847  701984 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1006 15:01:52.107845  701984 start.go:563] Will wait 60s for crictl version
	I1006 15:01:52.107895  701984 ssh_runner.go:195] Run: which crictl
	I1006 15:01:52.111706  701984 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1006 15:01:52.137020  701984 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1006 15:01:52.137126  701984 ssh_runner.go:195] Run: crio --version
	I1006 15:01:52.166358  701984 ssh_runner.go:195] Run: crio --version
	I1006 15:01:52.197148  701984 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1006 15:01:52.198353  701984 cli_runner.go:164] Run: docker network inspect ha-481559 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1006 15:01:52.216087  701984 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1006 15:01:52.220573  701984 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
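	# The one-liner above is the idempotent hosts-entry rewrite minikube uses throughout (it reappears
	# below for control-plane.minikube.internal): drop any existing line for the name, append a fresh
	# entry, and copy the temp file back via sudo. Its generic shape, with IP and NAME as placeholders:
	#   { grep -v $'\tNAME$' /etc/hosts; echo "IP	NAME"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts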
	I1006 15:01:52.231278  701984 kubeadm.go:883] updating cluster {Name:ha-481559 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-481559 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1006 15:01:52.231400  701984 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1006 15:01:52.231450  701984 ssh_runner.go:195] Run: sudo crictl images --output json
	I1006 15:01:52.264781  701984 crio.go:514] all images are preloaded for cri-o runtime.
	I1006 15:01:52.264801  701984 crio.go:433] Images already preloaded, skipping extraction
	I1006 15:01:52.264844  701984 ssh_runner.go:195] Run: sudo crictl images --output json
	I1006 15:01:52.291584  701984 crio.go:514] all images are preloaded for cri-o runtime.
	I1006 15:01:52.291607  701984 cache_images.go:85] Images are preloaded, skipping loading
	I1006 15:01:52.291614  701984 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1006 15:01:52.291708  701984 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-481559 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-481559 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1006 15:01:52.291770  701984 ssh_runner.go:195] Run: crio config
	I1006 15:01:52.338567  701984 cni.go:84] Creating CNI manager for ""
	I1006 15:01:52.338589  701984 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1006 15:01:52.338610  701984 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1006 15:01:52.338632  701984 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-481559 NodeName:ha-481559 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1006 15:01:52.338744  701984 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-481559"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1006 15:01:52.338801  701984 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1006 15:01:52.347483  701984 binaries.go:44] Found k8s binaries, skipping transfer
	I1006 15:01:52.347568  701984 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1006 15:01:52.355357  701984 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1006 15:01:52.367896  701984 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1006 15:01:52.380296  701984 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
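	# At this point the rendered kubeadm config from the block above has been staged on the node as
	# /var/tmp/minikube/kubeadm.yaml.new. A hypothetical spot-check with the pinned kubeadm binary
	# (the harness itself does not run this):
	#   out/minikube-linux-amd64 -p ha-481559 ssh \
	#     "sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new"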
	I1006 15:01:52.392680  701984 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1006 15:01:52.396473  701984 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
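The /etc/hosts rewrite above uses a filter-then-append pattern so the entry stays idempotent: any stale control-plane.minikube.internal line is removed before the current mapping is re-added, and the file is replaced in one copy from a temp file. The same pattern in isolation, with hypothetical host and IP values:

	# drop any old entry for the name, then append the current one
	{ grep -v $'\tmyhost.internal$' /etc/hosts; echo $'10.0.0.5\tmyhost.internal'; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts
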
	I1006 15:01:52.406328  701984 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 15:01:52.485101  701984 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1006 15:01:52.514051  701984 certs.go:69] Setting up /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559 for IP: 192.168.49.2
	I1006 15:01:52.514073  701984 certs.go:195] generating shared ca certs ...
	I1006 15:01:52.514090  701984 certs.go:227] acquiring lock for ca certs: {Name:mka0cc25cb6a953e937aa825fc55167759271aaf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 15:01:52.514284  701984 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21701-626179/.minikube/ca.key
	I1006 15:01:52.514339  701984 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21701-626179/.minikube/proxy-client-ca.key
	I1006 15:01:52.514355  701984 certs.go:257] generating profile certs ...
	I1006 15:01:52.514462  701984 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/client.key
	I1006 15:01:52.514544  701984 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.key.ac196ca6
	I1006 15:01:52.514595  701984 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.key
	I1006 15:01:52.514610  701984 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1006 15:01:52.514629  701984 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1006 15:01:52.514646  701984 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1006 15:01:52.514666  701984 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1006 15:01:52.514682  701984 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1006 15:01:52.514731  701984 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1006 15:01:52.514762  701984 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1006 15:01:52.514780  701984 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1006 15:01:52.514855  701984 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/629719.pem (1338 bytes)
	W1006 15:01:52.514898  701984 certs.go:480] ignoring /home/jenkins/minikube-integration/21701-626179/.minikube/certs/629719_empty.pem, impossibly tiny 0 bytes
	I1006 15:01:52.514911  701984 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca-key.pem (1675 bytes)
	I1006 15:01:52.514943  701984 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem (1082 bytes)
	I1006 15:01:52.514975  701984 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/cert.pem (1123 bytes)
	I1006 15:01:52.515013  701984 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/key.pem (1679 bytes)
	I1006 15:01:52.515066  701984 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem (1708 bytes)
	I1006 15:01:52.515159  701984 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/629719.pem -> /usr/share/ca-certificates/629719.pem
	I1006 15:01:52.515184  701984 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem -> /usr/share/ca-certificates/6297192.pem
	I1006 15:01:52.515222  701984 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1006 15:01:52.515850  701984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1006 15:01:52.536297  701984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1006 15:01:52.555790  701984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1006 15:01:52.575066  701984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1006 15:01:52.597425  701984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1006 15:01:52.616188  701984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1006 15:01:52.633992  701984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1006 15:01:52.651317  701984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1006 15:01:52.668942  701984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/certs/629719.pem --> /usr/share/ca-certificates/629719.pem (1338 bytes)
	I1006 15:01:52.685650  701984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem --> /usr/share/ca-certificates/6297192.pem (1708 bytes)
	I1006 15:01:52.702738  701984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1006 15:01:52.720514  701984 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1006 15:01:52.732781  701984 ssh_runner.go:195] Run: openssl version
	I1006 15:01:52.739000  701984 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/629719.pem && ln -fs /usr/share/ca-certificates/629719.pem /etc/ssl/certs/629719.pem"
	I1006 15:01:52.747351  701984 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/629719.pem
	I1006 15:01:52.751001  701984 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  6 14:13 /usr/share/ca-certificates/629719.pem
	I1006 15:01:52.751062  701984 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/629719.pem
	I1006 15:01:52.785464  701984 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/629719.pem /etc/ssl/certs/51391683.0"
	I1006 15:01:52.793884  701984 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6297192.pem && ln -fs /usr/share/ca-certificates/6297192.pem /etc/ssl/certs/6297192.pem"
	I1006 15:01:52.802527  701984 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6297192.pem
	I1006 15:01:52.806287  701984 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  6 14:13 /usr/share/ca-certificates/6297192.pem
	I1006 15:01:52.806346  701984 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6297192.pem
	I1006 15:01:52.839905  701984 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6297192.pem /etc/ssl/certs/3ec20f2e.0"
	I1006 15:01:52.847950  701984 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1006 15:01:52.856269  701984 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1006 15:01:52.859833  701984 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  6 13:56 /usr/share/ca-certificates/minikubeCA.pem
	I1006 15:01:52.859889  701984 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1006 15:01:52.893744  701984 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
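The link names 51391683.0, 3ec20f2e.0, and b5213941.0 above are OpenSSL subject hashes: the system trust store looks certificates up by the output of `openssl x509 -hash`, not by filename, which is why each cert gets a hash-named symlink. A sketch of recreating one such link, assuming the same certificate paths as in the log:

	# compute the subject hash and point the expected trust-store name at the cert
	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"
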
	I1006 15:01:52.902397  701984 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1006 15:01:52.906224  701984 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1006 15:01:52.940584  701984 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1006 15:01:52.975121  701984 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1006 15:01:53.010068  701984 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1006 15:01:53.056395  701984 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1006 15:01:53.098917  701984 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
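Each -checkend 86400 call above asks OpenSSL whether the certificate expires within the next 24 hours (86400 seconds); a non-zero exit would trigger regeneration. The six checks can be batched in a loop, assuming the same certs directory:

	# openssl exits 1 from -checkend when the cert expires within the window
	for c in apiserver-etcd-client apiserver-kubelet-client etcd/server \
	         etcd/healthcheck-client etcd/peer front-proxy-client; do
	  sudo openssl x509 -noout -checkend 86400 \
	    -in "/var/lib/minikube/certs/${c}.crt" || echo "expiring soon: ${c}"
	done
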
	I1006 15:01:53.133146  701984 kubeadm.go:400] StartCluster: {Name:ha-481559 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-481559 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 15:01:53.133293  701984 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1006 15:01:53.133350  701984 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1006 15:01:53.161765  701984 cri.go:89] found id: ""
	I1006 15:01:53.161834  701984 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1006 15:01:53.169767  701984 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1006 15:01:53.169786  701984 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1006 15:01:53.169835  701984 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1006 15:01:53.177348  701984 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1006 15:01:53.177860  701984 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-481559" does not appear in /home/jenkins/minikube-integration/21701-626179/kubeconfig
	I1006 15:01:53.178037  701984 kubeconfig.go:62] /home/jenkins/minikube-integration/21701-626179/kubeconfig needs updating (will repair): [kubeconfig missing "ha-481559" cluster setting kubeconfig missing "ha-481559" context setting]
	I1006 15:01:53.178466  701984 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-626179/kubeconfig: {Name:mke84a74c9d22714f21826744ac414fa621492d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 15:01:53.179258  701984 kapi.go:59] client config for ha-481559: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/client.crt", KeyFile:"/home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/client.key", CAFile:"/home/jenkins/minikube-integration/21701-626179/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1006 15:01:53.179749  701984 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1006 15:01:53.179781  701984 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1006 15:01:53.179788  701984 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1006 15:01:53.179794  701984 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1006 15:01:53.179789  701984 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1006 15:01:53.179801  701984 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1006 15:01:53.180239  701984 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1006 15:01:53.188398  701984 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1006 15:01:53.188432  701984 kubeadm.go:601] duration metric: took 18.640424ms to restartPrimaryControlPlane
	I1006 15:01:53.188443  701984 kubeadm.go:402] duration metric: took 55.31048ms to StartCluster
	I1006 15:01:53.188464  701984 settings.go:142] acquiring lock: {Name:mk49b10f71f24d1f54d5c453b3b04e717e9a9100 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 15:01:53.188537  701984 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21701-626179/kubeconfig
	I1006 15:01:53.189024  701984 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-626179/kubeconfig: {Name:mke84a74c9d22714f21826744ac414fa621492d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 15:01:53.189291  701984 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1006 15:01:53.189351  701984 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1006 15:01:53.189450  701984 addons.go:69] Setting storage-provisioner=true in profile "ha-481559"
	I1006 15:01:53.189472  701984 addons.go:238] Setting addon storage-provisioner=true in "ha-481559"
	I1006 15:01:53.189480  701984 addons.go:69] Setting default-storageclass=true in profile "ha-481559"
	I1006 15:01:53.189497  701984 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-481559"
	I1006 15:01:53.189510  701984 host.go:66] Checking if "ha-481559" exists ...
	I1006 15:01:53.189548  701984 config.go:182] Loaded profile config "ha-481559": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 15:01:53.189835  701984 cli_runner.go:164] Run: docker container inspect ha-481559 --format={{.State.Status}}
	I1006 15:01:53.190004  701984 cli_runner.go:164] Run: docker container inspect ha-481559 --format={{.State.Status}}
	I1006 15:01:53.192670  701984 out.go:179] * Verifying Kubernetes components...
	I1006 15:01:53.193943  701984 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 15:01:53.209649  701984 kapi.go:59] client config for ha-481559: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/client.crt", KeyFile:"/home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/client.key", CAFile:"/home/jenkins/minikube-integration/21701-626179/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1006 15:01:53.210039  701984 addons.go:238] Setting addon default-storageclass=true in "ha-481559"
	I1006 15:01:53.210089  701984 host.go:66] Checking if "ha-481559" exists ...
	I1006 15:01:53.210542  701984 cli_runner.go:164] Run: docker container inspect ha-481559 --format={{.State.Status}}
	I1006 15:01:53.211200  701984 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1006 15:01:53.212531  701984 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 15:01:53.212549  701984 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1006 15:01:53.212600  701984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 15:01:53.238402  701984 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1006 15:01:53.238430  701984 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1006 15:01:53.238493  701984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 15:01:53.240785  701984 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32893 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa Username:docker}
	I1006 15:01:53.257980  701984 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32893 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa Username:docker}
	I1006 15:01:53.293467  701984 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1006 15:01:53.307364  701984 node_ready.go:35] waiting up to 6m0s for node "ha-481559" to be "Ready" ...
	I1006 15:01:53.350572  701984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 15:01:53.365695  701984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	W1006 15:01:53.407298  701984 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 15:01:53.407342  701984 retry.go:31] will retry after 357.649421ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 15:01:53.420853  701984 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 15:01:53.420888  701984 retry.go:31] will retry after 373.269917ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
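The validation failures above are expected at this point in the restart: the apiserver is not yet listening on :8443, so kubectl cannot download the OpenAPI schema, and minikube retries each addon manifest with an increasing, jittered backoff (retry.go:31). A shell sketch of the same retry shape, with hypothetical delays:

	# retry the apply until the apiserver accepts it, backing off between attempts
	for delay in 0.4 0.8 1.6 3.2 6.4; do
	  sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	    /var/lib/minikube/binaries/v1.34.1/kubectl apply --force \
	    -f /etc/kubernetes/addons/storage-provisioner.yaml && break
	  sleep "$delay"
	done
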
	I1006 15:01:53.765311  701984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 15:01:53.794914  701984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1006 15:01:53.820162  701984 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 15:01:53.820198  701984 retry.go:31] will retry after 560.850722ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 15:01:53.849381  701984 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 15:01:53.849415  701984 retry.go:31] will retry after 534.611771ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 15:01:54.381588  701984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 15:01:54.385156  701984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1006 15:01:54.438225  701984 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 15:01:54.438264  701984 retry.go:31] will retry after 554.670785ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 15:01:54.439112  701984 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 15:01:54.439133  701984 retry.go:31] will retry after 308.986378ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 15:01:54.748751  701984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1006 15:01:54.803407  701984 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 15:01:54.803442  701984 retry.go:31] will retry after 474.547882ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 15:01:54.993194  701984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1006 15:01:55.046254  701984 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 15:01:55.046297  701984 retry.go:31] will retry after 677.664195ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 15:01:55.278726  701984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1006 15:01:55.308628  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:01:55.332936  701984 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 15:01:55.332970  701984 retry.go:31] will retry after 1.775881807s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
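In parallel, the node_ready poller queries /api/v1/nodes/ha-481559 directly and keeps getting connection refused until the apiserver binds 192.168.49.2:8443. The equivalent readiness check through kubectl, assuming a working kubeconfig for the cluster:

	# prints "True" once the node's Ready condition is satisfied
	kubectl get node ha-481559 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
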
	I1006 15:01:55.724438  701984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1006 15:01:55.776937  701984 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 15:01:55.776969  701984 retry.go:31] will retry after 843.878196ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 15:01:56.621961  701984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1006 15:01:56.675428  701984 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 15:01:56.675463  701984 retry.go:31] will retry after 1.450357982s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 15:01:57.109402  701984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1006 15:01:57.163276  701984 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 15:01:57.163309  701984 retry.go:31] will retry after 2.464163888s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 15:01:57.308897  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	I1006 15:01:58.126261  701984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1006 15:01:58.179363  701984 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 15:01:58.179391  701984 retry.go:31] will retry after 3.126763455s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 15:01:59.628619  701984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1006 15:01:59.681154  701984 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 15:01:59.681190  701984 retry.go:31] will retry after 1.480440704s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 15:01:59.808774  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	I1006 15:02:01.162599  701984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1006 15:02:01.216807  701984 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 15:02:01.216851  701984 retry.go:31] will retry after 3.761635647s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 15:02:01.307128  701984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1006 15:02:01.362791  701984 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 15:02:01.362827  701984 retry.go:31] will retry after 3.177813602s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 15:02:01.808826  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:02:04.308637  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	I1006 15:02:04.540904  701984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1006 15:02:04.594444  701984 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 15:02:04.594481  701984 retry.go:31] will retry after 9.418537731s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 15:02:04.979473  701984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1006 15:02:05.032152  701984 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 15:02:05.032191  701984 retry.go:31] will retry after 8.203513024s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 15:02:06.808141  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:02:08.808703  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:02:11.308146  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	I1006 15:02:13.236126  701984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1006 15:02:13.291139  701984 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 15:02:13.291178  701984 retry.go:31] will retry after 13.734152969s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 15:02:13.308624  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	I1006 15:02:14.013259  701984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1006 15:02:14.066927  701984 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 15:02:14.066963  701984 retry.go:31] will retry after 4.968343953s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 15:02:15.808091  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:02:17.808317  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	I1006 15:02:19.035709  701984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1006 15:02:19.089785  701984 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 15:02:19.089821  701984 retry.go:31] will retry after 18.450329534s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 15:02:19.808717  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:02:22.308279  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:02:24.808005  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:02:26.808376  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	I1006 15:02:27.025657  701984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1006 15:02:27.079430  701984 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 15:02:27.079467  701984 retry.go:31] will retry after 18.308744233s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 15:02:28.808528  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:02:31.308878  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:02:33.808327  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:02:35.808829  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	I1006 15:02:37.540393  701984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1006 15:02:37.593965  701984 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 15:02:37.593995  701984 retry.go:31] will retry after 14.430254714s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 15:02:38.308827  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:02:40.808189  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:02:42.808607  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:02:44.808693  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	I1006 15:02:45.388851  701984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1006 15:02:45.443913  701984 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 15:02:45.443945  701984 retry.go:31] will retry after 30.607683046s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 15:02:47.309012  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:02:49.808000  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:02:51.808101  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	I1006 15:02:52.024419  701984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1006 15:02:52.078859  701984 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 15:02:52.078891  701984 retry.go:31] will retry after 32.375753443s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 15:02:53.808234  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:02:55.808746  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:02:58.308064  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:03:00.308503  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:03:02.808227  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:03:04.808951  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:03:07.308259  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:03:09.308723  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:03:11.808466  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:03:13.808963  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	I1006 15:03:16.052424  701984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1006 15:03:16.106854  701984 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 15:03:16.106899  701984 retry.go:31] will retry after 23.781842061s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 15:03:16.308055  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:03:18.308668  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:03:20.808285  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:03:22.808988  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	I1006 15:03:24.455485  701984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1006 15:03:24.509566  701984 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 15:03:24.509687  701984 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1006 15:03:25.308449  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:03:27.308947  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:03:29.808772  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:03:32.308333  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:03:34.308810  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:03:36.808133  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:03:38.808620  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	I1006 15:03:39.889153  701984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1006 15:03:39.944329  701984 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 15:03:39.944473  701984 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1006 15:03:39.946959  701984 out.go:179] * Enabled addons: 
	I1006 15:03:39.947914  701984 addons.go:514] duration metric: took 1m46.758571336s for enable addons: enabled=[]
	W1006 15:03:41.308834  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	[... the identical node_ready.go:55 "connection refused" warning repeats roughly every 2-2.5s from 15:03:43 through 15:07:50 ...]
	W1006 15:07:52.809004  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	I1006 15:07:53.308060  701984 node_ready.go:38] duration metric: took 6m0.000216007s for node "ha-481559" to be "Ready" ...
	I1006 15:07:53.311054  701984 out.go:203] 
	W1006 15:07:53.312196  701984 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1006 15:07:53.312219  701984 out.go:285] * 
	W1006 15:07:53.313838  701984 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1006 15:07:53.315023  701984 out.go:203] 

                                                
                                                
** /stderr **
ha_test.go:564: failed to start cluster. args "out/minikube-linux-amd64 -p ha-481559 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio" : exit status 80
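Every failure in the log above traces to a single root cause: after the restart, no apiserver ever comes up on port 8443, so the addon `kubectl apply` calls fail OpenAPI validation against `https://localhost:8443/openapi/v2`, the readiness poll against `https://192.168.49.2:8443` gets `connection refused`, and the 6m0s wait budget expires (GUEST_START, exit status 80). The `retry.go:31] will retry after ...` lines show minikube spacing out the re-applies with a randomized delay. The following is a minimal Go sketch of that jittered, deadline-bounded retry pattern; the names and constants are illustrative, not minikube's actual implementation:

	package main

	import (
		"context"
		"fmt"
		"math/rand"
		"time"
	)

	// retryUntilDeadline re-runs fn until it succeeds or ctx expires,
	// sleeping a randomized interval between attempts, mirroring the
	// varying "will retry after Ns" delays in the log above.
	// Hypothetical sketch; not minikube source code.
	func retryUntilDeadline(ctx context.Context, base time.Duration, fn func() error) error {
		for {
			err := fn()
			if err == nil {
				return nil
			}
			delay := base + time.Duration(rand.Int63n(int64(base))) // jitter
			fmt.Printf("apply failed, will retry after %s: %v\n", delay, err)
			select {
			case <-time.After(delay):
			case <-ctx.Done():
				return fmt.Errorf("giving up: %w", ctx.Err())
			}
		}
	}

	func main() {
		// Bound the loop the way --wait bounds node readiness (6m0s here).
		ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
		defer cancel()
		fmt.Println(retryUntilDeadline(ctx, 15*time.Second, func() error {
			return fmt.Errorf("dial tcp [::1]:8443: connect: connection refused")
		}))
	}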
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-481559
helpers_test.go:243: (dbg) docker inspect ha-481559:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "8b017d29b6b188c11217460aa328e959f3cceef4aaac68c0efbe9c3f356f27b0",
	        "Created": "2025-10-06T14:44:39.623616791Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 702186,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-06T15:01:46.338559643Z",
	            "FinishedAt": "2025-10-06T15:01:45.038433314Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/8b017d29b6b188c11217460aa328e959f3cceef4aaac68c0efbe9c3f356f27b0/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8b017d29b6b188c11217460aa328e959f3cceef4aaac68c0efbe9c3f356f27b0/hostname",
	        "HostsPath": "/var/lib/docker/containers/8b017d29b6b188c11217460aa328e959f3cceef4aaac68c0efbe9c3f356f27b0/hosts",
	        "LogPath": "/var/lib/docker/containers/8b017d29b6b188c11217460aa328e959f3cceef4aaac68c0efbe9c3f356f27b0/8b017d29b6b188c11217460aa328e959f3cceef4aaac68c0efbe9c3f356f27b0-json.log",
	        "Name": "/ha-481559",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-481559:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-481559",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "8b017d29b6b188c11217460aa328e959f3cceef4aaac68c0efbe9c3f356f27b0",
	                "LowerDir": "/var/lib/docker/overlay2/764c4d13032cea1c981a1513641378a4e309e5bc01b5356c6fecba1c3e0e2311-init/diff:/var/lib/docker/overlay2/498c39ad2e273bbda04a4b230222b9767ea2da097b1fe98436168d26143cd080/diff",
	                "MergedDir": "/var/lib/docker/overlay2/764c4d13032cea1c981a1513641378a4e309e5bc01b5356c6fecba1c3e0e2311/merged",
	                "UpperDir": "/var/lib/docker/overlay2/764c4d13032cea1c981a1513641378a4e309e5bc01b5356c6fecba1c3e0e2311/diff",
	                "WorkDir": "/var/lib/docker/overlay2/764c4d13032cea1c981a1513641378a4e309e5bc01b5356c6fecba1c3e0e2311/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-481559",
	                "Source": "/var/lib/docker/volumes/ha-481559/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-481559",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-481559",
	                "name.minikube.sigs.k8s.io": "ha-481559",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "96ad0a0c00ce1e2fd1255251fdbe6e26beae966a5054a86bbea20c89f537c09f",
	            "SandboxKey": "/var/run/docker/netns/96ad0a0c00ce",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32893"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32894"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32897"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32895"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32896"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-481559": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "da:92:da:5b:3d:78",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "be549c6a1ae4457d4629d9a7f86cde88021333ee0af8bb7a740b008115c43dde",
	                    "EndpointID": "c5dcb77b8e9feae93629ab92a205600e06ab65076f80e1ea27e6fbc473fcf4ef",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-481559",
	                        "8b017d29b6b1"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
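Note what the inspect output actually establishes: the container itself is healthy ("Status": "running", RestartCount 0, restarted at 15:01:46), the kicbase entrypoint is PID 1, 8443/tcp is published to 127.0.0.1:32896, and the node holds its expected static address 192.168.49.2 on the ha-481559 network. The refused connections therefore originate inside the guest (nothing listening on 8443), not from Docker networking. The same fields can be pulled directly with docker inspect's Go-template flag; since the network key contains a hyphen, it has to be accessed through index:

	docker inspect -f '{{.State.Status}} {{(index .NetworkSettings.Networks "ha-481559").IPAddress}} {{index .NetworkSettings.Ports "8443/tcp"}}' ha-481559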
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-481559 -n ha-481559
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-481559 -n ha-481559: exit status 2 (304.401053ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
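The --format={{.Host}} flag renders only the Host field of minikube's status through a Go template, which is why the command prints "Running" while still exiting 2: the non-zero exit encodes that other components are unhealthy, and the harness accordingly treats it as informational. Assuming the standard status field names shown in minikube's default output (Host, Kubelet, APIServer, Kubeconfig), the full component breakdown can be requested in one call:

	out/minikube-linux-amd64 status -p ha-481559 --format='{{.Host}} {{.Kubelet}} {{.APIServer}} {{.Kubeconfig}}'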
helpers_test.go:252: <<< TestMultiControlPlane/serial/RestartCluster FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-481559 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/RestartCluster logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                             ARGS                                             │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 14:52 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 14:52 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 14:53 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 14:53 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 14:53 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 14:53 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 14:53 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 14:53 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 14:54 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 14:54 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                        │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 14:54 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- exec  -- nslookup kubernetes.io                                         │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 14:54 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- exec  -- nslookup kubernetes.default                                    │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 14:54 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local                  │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 14:54 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                        │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 14:54 UTC │                     │
	│ node    │ ha-481559 node add --alsologtostderr -v 5                                                    │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 14:54 UTC │                     │
	│ node    │ ha-481559 node stop m02 --alsologtostderr -v 5                                               │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 14:54 UTC │                     │
	│ node    │ ha-481559 node start m02 --alsologtostderr -v 5                                              │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 14:54 UTC │                     │
	│ node    │ ha-481559 node list --alsologtostderr -v 5                                                   │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 14:55 UTC │                     │
	│ stop    │ ha-481559 stop --alsologtostderr -v 5                                                        │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 14:55 UTC │ 06 Oct 25 14:55 UTC │
	│ start   │ ha-481559 start --wait true --alsologtostderr -v 5                                           │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 14:55 UTC │                     │
	│ node    │ ha-481559 node list --alsologtostderr -v 5                                                   │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 15:01 UTC │                     │
	│ node    │ ha-481559 node delete m03 --alsologtostderr -v 5                                             │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 15:01 UTC │                     │
	│ stop    │ ha-481559 stop --alsologtostderr -v 5                                                        │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 15:01 UTC │ 06 Oct 25 15:01 UTC │
	│ start   │ ha-481559 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 15:01 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
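	
	The audit trail above ends with the stop/start pair that RestartCluster exercises: a full stop of the profile followed by `start --wait true` against the same cluster. Reproduced by hand it amounts to the following (a sketch; the exact argument order of the logged invocations may differ):
	
	    out/minikube-linux-amd64 -p ha-481559 stop --alsologtostderr -v 5
	    out/minikube-linux-amd64 -p ha-481559 start --wait true --alsologtostderr -v 5 \
	      --driver=docker --container-runtime=crio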
	
	
	==> Last Start <==
	Log file created at: 2025/10/06 15:01:46
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1006 15:01:46.116187  701984 out.go:360] Setting OutFile to fd 1 ...
	I1006 15:01:46.116327  701984 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 15:01:46.116336  701984 out.go:374] Setting ErrFile to fd 2...
	I1006 15:01:46.116340  701984 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 15:01:46.116564  701984 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-626179/.minikube/bin
	I1006 15:01:46.116989  701984 out.go:368] Setting JSON to false
	I1006 15:01:46.117973  701984 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":20642,"bootTime":1759742264,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1006 15:01:46.118071  701984 start.go:140] virtualization: kvm guest
	I1006 15:01:46.119930  701984 out.go:179] * [ha-481559] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1006 15:01:46.121071  701984 out.go:179]   - MINIKUBE_LOCATION=21701
	I1006 15:01:46.121071  701984 notify.go:220] Checking for updates...
	I1006 15:01:46.123063  701984 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1006 15:01:46.124433  701984 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21701-626179/kubeconfig
	I1006 15:01:46.125406  701984 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21701-626179/.minikube
	I1006 15:01:46.126304  701984 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1006 15:01:46.127330  701984 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1006 15:01:46.128989  701984 config.go:182] Loaded profile config "ha-481559": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 15:01:46.129680  701984 driver.go:421] Setting default libvirt URI to qemu:///system
	I1006 15:01:46.153833  701984 docker.go:123] docker version: linux-28.5.0:Docker Engine - Community
	I1006 15:01:46.153923  701984 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 15:01:46.210040  701984 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-06 15:01:46.200236285 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1006 15:01:46.210147  701984 docker.go:318] overlay module found
	I1006 15:01:46.211692  701984 out.go:179] * Using the docker driver based on existing profile
	I1006 15:01:46.212596  701984 start.go:304] selected driver: docker
	I1006 15:01:46.212612  701984 start.go:924] validating driver "docker" against &{Name:ha-481559 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-481559 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 15:01:46.212693  701984 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1006 15:01:46.212776  701984 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 15:01:46.269605  701984 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-06 15:01:46.258876471 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1006 15:01:46.270302  701984 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1006 15:01:46.270329  701984 cni.go:84] Creating CNI manager for ""
	I1006 15:01:46.270373  701984 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1006 15:01:46.270419  701984 start.go:348] cluster config:
	{Name:ha-481559 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-481559 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 15:01:46.272125  701984 out.go:179] * Starting "ha-481559" primary control-plane node in "ha-481559" cluster
	I1006 15:01:46.273048  701984 cache.go:123] Beginning downloading kic base image for docker with crio
	I1006 15:01:46.274095  701984 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1006 15:01:46.274969  701984 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1006 15:01:46.275001  701984 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21701-626179/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1006 15:01:46.275010  701984 cache.go:58] Caching tarball of preloaded images
	I1006 15:01:46.275079  701984 preload.go:233] Found /home/jenkins/minikube-integration/21701-626179/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1006 15:01:46.275089  701984 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1006 15:01:46.275081  701984 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1006 15:01:46.275176  701984 profile.go:143] Saving config to /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/config.json ...
	I1006 15:01:46.295225  701984 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1006 15:01:46.295246  701984 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1006 15:01:46.295266  701984 cache.go:232] Successfully downloaded all kic artifacts
	I1006 15:01:46.295293  701984 start.go:360] acquireMachinesLock for ha-481559: {Name:mk240cd185ab39e9e4d3fa7c476aea5736cb5b11 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1006 15:01:46.295349  701984 start.go:364] duration metric: took 37.555µs to acquireMachinesLock for "ha-481559"
	I1006 15:01:46.295367  701984 start.go:96] Skipping create...Using existing machine configuration
	I1006 15:01:46.295375  701984 fix.go:54] fixHost starting: 
	I1006 15:01:46.295587  701984 cli_runner.go:164] Run: docker container inspect ha-481559 --format={{.State.Status}}
	I1006 15:01:46.312275  701984 fix.go:112] recreateIfNeeded on ha-481559: state=Stopped err=<nil>
	W1006 15:01:46.312302  701984 fix.go:138] unexpected machine state, will restart: <nil>
	I1006 15:01:46.314002  701984 out.go:252] * Restarting existing docker container for "ha-481559" ...
	I1006 15:01:46.314062  701984 cli_runner.go:164] Run: docker start ha-481559
	I1006 15:01:46.546450  701984 cli_runner.go:164] Run: docker container inspect ha-481559 --format={{.State.Status}}
	I1006 15:01:46.564424  701984 kic.go:430] container "ha-481559" state is running.
	I1006 15:01:46.564772  701984 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-481559
	I1006 15:01:46.582786  701984 profile.go:143] Saving config to /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/config.json ...
	I1006 15:01:46.582997  701984 machine.go:93] provisionDockerMachine start ...
	I1006 15:01:46.583078  701984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 15:01:46.601452  701984 main.go:141] libmachine: Using SSH client type: native
	I1006 15:01:46.601724  701984 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32893 <nil> <nil>}
	I1006 15:01:46.601739  701984 main.go:141] libmachine: About to run SSH command:
	hostname
	I1006 15:01:46.602337  701984 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:35090->127.0.0.1:32893: read: connection reset by peer
	I1006 15:01:49.745932  701984 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-481559
	
	I1006 15:01:49.745960  701984 ubuntu.go:182] provisioning hostname "ha-481559"
	I1006 15:01:49.746042  701984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 15:01:49.763495  701984 main.go:141] libmachine: Using SSH client type: native
	I1006 15:01:49.763769  701984 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32893 <nil> <nil>}
	I1006 15:01:49.763784  701984 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-481559 && echo "ha-481559" | sudo tee /etc/hostname
	I1006 15:01:49.916644  701984 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-481559
	
	I1006 15:01:49.916725  701984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 15:01:49.934847  701984 main.go:141] libmachine: Using SSH client type: native
	I1006 15:01:49.935071  701984 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32893 <nil> <nil>}
	I1006 15:01:49.935089  701984 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-481559' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-481559/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-481559' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1006 15:01:50.079011  701984 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1006 15:01:50.079055  701984 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21701-626179/.minikube CaCertPath:/home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21701-626179/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21701-626179/.minikube}
	I1006 15:01:50.079077  701984 ubuntu.go:190] setting up certificates
	I1006 15:01:50.079088  701984 provision.go:84] configureAuth start
	I1006 15:01:50.079141  701984 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-481559
	I1006 15:01:50.096776  701984 provision.go:143] copyHostCerts
	I1006 15:01:50.096843  701984 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21701-626179/.minikube/key.pem
	I1006 15:01:50.096887  701984 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-626179/.minikube/key.pem, removing ...
	I1006 15:01:50.096924  701984 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-626179/.minikube/key.pem
	I1006 15:01:50.097001  701984 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21701-626179/.minikube/key.pem (1679 bytes)
	I1006 15:01:50.097123  701984 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21701-626179/.minikube/ca.pem
	I1006 15:01:50.097151  701984 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-626179/.minikube/ca.pem, removing ...
	I1006 15:01:50.097159  701984 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-626179/.minikube/ca.pem
	I1006 15:01:50.097230  701984 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21701-626179/.minikube/ca.pem (1082 bytes)
	I1006 15:01:50.097381  701984 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21701-626179/.minikube/cert.pem
	I1006 15:01:50.097413  701984 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-626179/.minikube/cert.pem, removing ...
	I1006 15:01:50.097420  701984 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-626179/.minikube/cert.pem
	I1006 15:01:50.097468  701984 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21701-626179/.minikube/cert.pem (1123 bytes)
	I1006 15:01:50.097549  701984 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21701-626179/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca-key.pem org=jenkins.ha-481559 san=[127.0.0.1 192.168.49.2 ha-481559 localhost minikube]
	I1006 15:01:50.447800  701984 provision.go:177] copyRemoteCerts
	I1006 15:01:50.447874  701984 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1006 15:01:50.447927  701984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 15:01:50.465959  701984 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32893 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa Username:docker}
	I1006 15:01:50.568789  701984 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1006 15:01:50.568870  701984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1006 15:01:50.586702  701984 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1006 15:01:50.586774  701984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1006 15:01:50.604720  701984 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1006 15:01:50.604808  701984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1006 15:01:50.622688  701984 provision.go:87] duration metric: took 543.582589ms to configureAuth
	I1006 15:01:50.622726  701984 ubuntu.go:206] setting minikube options for container-runtime
	I1006 15:01:50.622909  701984 config.go:182] Loaded profile config "ha-481559": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 15:01:50.623013  701984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 15:01:50.640864  701984 main.go:141] libmachine: Using SSH client type: native
	I1006 15:01:50.641165  701984 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32893 <nil> <nil>}
	I1006 15:01:50.641193  701984 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1006 15:01:50.900815  701984 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1006 15:01:50.900843  701984 machine.go:96] duration metric: took 4.317828783s to provisionDockerMachine
	I1006 15:01:50.900853  701984 start.go:293] postStartSetup for "ha-481559" (driver="docker")
	I1006 15:01:50.900863  701984 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1006 15:01:50.900923  701984 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1006 15:01:50.900961  701984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 15:01:50.918547  701984 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32893 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa Username:docker}
	I1006 15:01:51.021081  701984 ssh_runner.go:195] Run: cat /etc/os-release
	I1006 15:01:51.024764  701984 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1006 15:01:51.024788  701984 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1006 15:01:51.024798  701984 filesync.go:126] Scanning /home/jenkins/minikube-integration/21701-626179/.minikube/addons for local assets ...
	I1006 15:01:51.024843  701984 filesync.go:126] Scanning /home/jenkins/minikube-integration/21701-626179/.minikube/files for local assets ...
	I1006 15:01:51.024912  701984 filesync.go:149] local asset: /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem -> 6297192.pem in /etc/ssl/certs
	I1006 15:01:51.024927  701984 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem -> /etc/ssl/certs/6297192.pem
	I1006 15:01:51.025019  701984 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1006 15:01:51.032826  701984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem --> /etc/ssl/certs/6297192.pem (1708 bytes)
	I1006 15:01:51.050602  701984 start.go:296] duration metric: took 149.73063ms for postStartSetup
	I1006 15:01:51.050696  701984 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1006 15:01:51.050748  701984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 15:01:51.068484  701984 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32893 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa Username:docker}
	I1006 15:01:51.167707  701984 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1006 15:01:51.172531  701984 fix.go:56] duration metric: took 4.877147401s for fixHost
	I1006 15:01:51.172561  701984 start.go:83] releasing machines lock for "ha-481559", held for 4.877200795s
	I1006 15:01:51.172636  701984 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-481559
	I1006 15:01:51.190941  701984 ssh_runner.go:195] Run: cat /version.json
	I1006 15:01:51.191006  701984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 15:01:51.191054  701984 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1006 15:01:51.191134  701984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 15:01:51.209128  701984 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32893 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa Username:docker}
	I1006 15:01:51.209584  701984 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32893 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa Username:docker}
	I1006 15:01:51.362495  701984 ssh_runner.go:195] Run: systemctl --version
	I1006 15:01:51.369363  701984 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1006 15:01:51.404999  701984 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1006 15:01:51.409958  701984 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1006 15:01:51.410028  701984 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1006 15:01:51.418138  701984 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1006 15:01:51.418168  701984 start.go:495] detecting cgroup driver to use...
	I1006 15:01:51.418201  701984 detect.go:190] detected "systemd" cgroup driver on host os
	I1006 15:01:51.418264  701984 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1006 15:01:51.432500  701984 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1006 15:01:51.444740  701984 docker.go:218] disabling cri-docker service (if available) ...
	I1006 15:01:51.444799  701984 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1006 15:01:51.459568  701984 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1006 15:01:51.472638  701984 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1006 15:01:51.548093  701984 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1006 15:01:51.629502  701984 docker.go:234] disabling docker service ...
	I1006 15:01:51.629574  701984 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1006 15:01:51.643687  701984 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1006 15:01:51.656528  701984 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1006 15:01:51.734011  701984 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1006 15:01:51.812779  701984 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1006 15:01:51.825167  701984 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1006 15:01:51.839186  701984 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1006 15:01:51.839274  701984 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 15:01:51.848529  701984 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1006 15:01:51.848608  701984 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 15:01:51.857415  701984 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 15:01:51.866115  701984 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 15:01:51.874826  701984 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1006 15:01:51.882836  701984 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 15:01:51.891797  701984 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 15:01:51.900171  701984 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 15:01:51.908782  701984 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1006 15:01:51.916072  701984 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1006 15:01:51.923289  701984 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 15:01:51.999114  701984 ssh_runner.go:195] Run: sudo systemctl restart crio
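	
	The run of sed edits above rewrites the CRI-O drop-in config in place before restarting the runtime. Collected into one script for readability (a sketch assuming the same drop-in path and values the commands above use):
	
	    # Point crictl at the CRI-O socket, then set the pause image and the
	    # systemd cgroup manager in the drop-in, exactly as the commands above do.
	    CONF=/etc/crio/crio.conf.d/02-crio.conf
	    printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
	    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' "$CONF"
	    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' "$CONF"
	    sudo sed -i '/conmon_cgroup = .*/d' "$CONF"
	    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"
	    sudo systemctl daemon-reload && sudo systemctl restart crio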
	I1006 15:01:52.103785  701984 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1006 15:01:52.103847  701984 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1006 15:01:52.107845  701984 start.go:563] Will wait 60s for crictl version
	I1006 15:01:52.107895  701984 ssh_runner.go:195] Run: which crictl
	I1006 15:01:52.111706  701984 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1006 15:01:52.137020  701984 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1006 15:01:52.137126  701984 ssh_runner.go:195] Run: crio --version
	I1006 15:01:52.166358  701984 ssh_runner.go:195] Run: crio --version
	I1006 15:01:52.197148  701984 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1006 15:01:52.198353  701984 cli_runner.go:164] Run: docker network inspect ha-481559 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1006 15:01:52.216087  701984 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1006 15:01:52.220573  701984 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
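	
	The one-liner above is minikube's /etc/hosts refresh pattern: strip any stale host.minikube.internal entry, append the current gateway mapping, and copy the temp file over /etc/hosts in a single step. Unrolled (a sketch using the same addresses as the logged command):
	
	    # Rebuild /etc/hosts minus the old entry, add the fresh one, then install it.
	    { grep -v $'\thost.minikube.internal$' /etc/hosts
	      echo $'192.168.49.1\thost.minikube.internal'
	    } > /tmp/h.$$
	    sudo cp /tmp/h.$$ /etc/hosts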
	I1006 15:01:52.231278  701984 kubeadm.go:883] updating cluster {Name:ha-481559 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-481559 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1006 15:01:52.231400  701984 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1006 15:01:52.231450  701984 ssh_runner.go:195] Run: sudo crictl images --output json
	I1006 15:01:52.264781  701984 crio.go:514] all images are preloaded for cri-o runtime.
	I1006 15:01:52.264801  701984 crio.go:433] Images already preloaded, skipping extraction
	I1006 15:01:52.264844  701984 ssh_runner.go:195] Run: sudo crictl images --output json
	I1006 15:01:52.291584  701984 crio.go:514] all images are preloaded for cri-o runtime.
	I1006 15:01:52.291607  701984 cache_images.go:85] Images are preloaded, skipping loading
	I1006 15:01:52.291614  701984 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1006 15:01:52.291708  701984 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-481559 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-481559 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1006 15:01:52.291770  701984 ssh_runner.go:195] Run: crio config
	I1006 15:01:52.338567  701984 cni.go:84] Creating CNI manager for ""
	I1006 15:01:52.338589  701984 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1006 15:01:52.338610  701984 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1006 15:01:52.338632  701984 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-481559 NodeName:ha-481559 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1006 15:01:52.338744  701984 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-481559"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
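	
	The rendered kubeadm config above is written to /var/tmp/minikube/kubeadm.yaml.new a few lines below. One way to exercise such a config without changing anything on the node (an illustration, not a step the harness runs here) is kubeadm's dry-run mode:
	
	    # Print what init would do with the rendered config, applying nothing.
	    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run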
	
	I1006 15:01:52.338801  701984 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1006 15:01:52.347483  701984 binaries.go:44] Found k8s binaries, skipping transfer
	I1006 15:01:52.347568  701984 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1006 15:01:52.355357  701984 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1006 15:01:52.367896  701984 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1006 15:01:52.380296  701984 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1006 15:01:52.392680  701984 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1006 15:01:52.396473  701984 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1006 15:01:52.406328  701984 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 15:01:52.485101  701984 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1006 15:01:52.514051  701984 certs.go:69] Setting up /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559 for IP: 192.168.49.2
	I1006 15:01:52.514073  701984 certs.go:195] generating shared ca certs ...
	I1006 15:01:52.514090  701984 certs.go:227] acquiring lock for ca certs: {Name:mka0cc25cb6a953e937aa825fc55167759271aaf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 15:01:52.514284  701984 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21701-626179/.minikube/ca.key
	I1006 15:01:52.514339  701984 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21701-626179/.minikube/proxy-client-ca.key
	I1006 15:01:52.514355  701984 certs.go:257] generating profile certs ...
	I1006 15:01:52.514462  701984 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/client.key
	I1006 15:01:52.514544  701984 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.key.ac196ca6
	I1006 15:01:52.514595  701984 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.key
	I1006 15:01:52.514610  701984 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1006 15:01:52.514629  701984 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1006 15:01:52.514646  701984 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1006 15:01:52.514666  701984 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1006 15:01:52.514682  701984 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1006 15:01:52.514731  701984 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1006 15:01:52.514762  701984 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1006 15:01:52.514780  701984 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1006 15:01:52.514855  701984 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/629719.pem (1338 bytes)
	W1006 15:01:52.514898  701984 certs.go:480] ignoring /home/jenkins/minikube-integration/21701-626179/.minikube/certs/629719_empty.pem, impossibly tiny 0 bytes
	I1006 15:01:52.514911  701984 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca-key.pem (1675 bytes)
	I1006 15:01:52.514943  701984 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem (1082 bytes)
	I1006 15:01:52.514975  701984 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/cert.pem (1123 bytes)
	I1006 15:01:52.515013  701984 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/key.pem (1679 bytes)
	I1006 15:01:52.515066  701984 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem (1708 bytes)
	I1006 15:01:52.515159  701984 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/629719.pem -> /usr/share/ca-certificates/629719.pem
	I1006 15:01:52.515184  701984 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem -> /usr/share/ca-certificates/6297192.pem
	I1006 15:01:52.515222  701984 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1006 15:01:52.515850  701984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1006 15:01:52.536297  701984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1006 15:01:52.555790  701984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1006 15:01:52.575066  701984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1006 15:01:52.597425  701984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1006 15:01:52.616188  701984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1006 15:01:52.633992  701984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1006 15:01:52.651317  701984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1006 15:01:52.668942  701984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/certs/629719.pem --> /usr/share/ca-certificates/629719.pem (1338 bytes)
	I1006 15:01:52.685650  701984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem --> /usr/share/ca-certificates/6297192.pem (1708 bytes)
	I1006 15:01:52.702738  701984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1006 15:01:52.720514  701984 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1006 15:01:52.732781  701984 ssh_runner.go:195] Run: openssl version
	I1006 15:01:52.739000  701984 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/629719.pem && ln -fs /usr/share/ca-certificates/629719.pem /etc/ssl/certs/629719.pem"
	I1006 15:01:52.747351  701984 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/629719.pem
	I1006 15:01:52.751001  701984 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  6 14:13 /usr/share/ca-certificates/629719.pem
	I1006 15:01:52.751062  701984 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/629719.pem
	I1006 15:01:52.785464  701984 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/629719.pem /etc/ssl/certs/51391683.0"
	I1006 15:01:52.793884  701984 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6297192.pem && ln -fs /usr/share/ca-certificates/6297192.pem /etc/ssl/certs/6297192.pem"
	I1006 15:01:52.802527  701984 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6297192.pem
	I1006 15:01:52.806287  701984 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  6 14:13 /usr/share/ca-certificates/6297192.pem
	I1006 15:01:52.806346  701984 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6297192.pem
	I1006 15:01:52.839905  701984 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6297192.pem /etc/ssl/certs/3ec20f2e.0"
	I1006 15:01:52.847950  701984 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1006 15:01:52.856269  701984 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1006 15:01:52.859833  701984 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  6 13:56 /usr/share/ca-certificates/minikubeCA.pem
	I1006 15:01:52.859889  701984 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1006 15:01:52.893744  701984 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
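The 51391683.0, 3ec20f2e.0, and b5213941.0 link names above follow OpenSSL's subject-hash convention: each CA is hashed with openssl x509 -hash -noout and symlinked into /etc/ssl/certs as <hash>.0 so TLS clients can find it. A minimal Go sketch of that pattern (illustrative only; installCACert is not minikube's actual helper, and it assumes openssl on PATH and write access to /etc/ssl/certs):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // installCACert computes the OpenSSL subject hash of a PEM certificate
    // and symlinks it into /etc/ssl/certs/<hash>.0, mirroring the
    // "openssl x509 -hash" plus "ln -fs" pairs in the log above.
    func installCACert(pem string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
        if err != nil {
            return fmt.Errorf("hashing %s: %w", pem, err)
        }
        link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
        _ = os.Remove(link) // emulate ln -fs: drop any stale link first
        return os.Symlink(pem, link)
    }

    func main() {
        if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }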
	I1006 15:01:52.902397  701984 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1006 15:01:52.906224  701984 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1006 15:01:52.940584  701984 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1006 15:01:52.975121  701984 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1006 15:01:53.010068  701984 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1006 15:01:53.056395  701984 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1006 15:01:53.098917  701984 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
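The -checkend 86400 runs above ask OpenSSL whether each control-plane cert will still be valid 86400 seconds (24 hours) from now: exit code 0 means yes, 1 means it will have expired. A sketch of how a caller can interpret that exit code in Go (certValidFor24h is an illustrative name, not minikube's):

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    // certValidFor24h reports whether the certificate at path will still be
    // valid 24h from now, using openssl's -checkend exit status.
    func certValidFor24h(path string) (bool, error) {
        err := exec.Command("openssl", "x509", "-noout", "-in", path, "-checkend", "86400").Run()
        if err == nil {
            return true, nil // exit 0: still valid in 24h
        }
        var exitErr *exec.ExitError
        if errors.As(err, &exitErr) {
            return false, nil // exit 1: expires within 24h
        }
        return false, err // openssl missing, unreadable file, etc.
    }

    func main() {
        ok, err := certValidFor24h("/var/lib/minikube/certs/apiserver.crt")
        fmt.Println(ok, err)
    }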
	I1006 15:01:53.133146  701984 kubeadm.go:400] StartCluster: {Name:ha-481559 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-481559 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 15:01:53.133293  701984 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1006 15:01:53.133350  701984 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1006 15:01:53.161765  701984 cri.go:89] found id: ""
	I1006 15:01:53.161834  701984 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1006 15:01:53.169767  701984 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1006 15:01:53.169786  701984 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1006 15:01:53.169835  701984 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1006 15:01:53.177348  701984 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1006 15:01:53.177860  701984 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-481559" does not appear in /home/jenkins/minikube-integration/21701-626179/kubeconfig
	I1006 15:01:53.178037  701984 kubeconfig.go:62] /home/jenkins/minikube-integration/21701-626179/kubeconfig needs updating (will repair): [kubeconfig missing "ha-481559" cluster setting kubeconfig missing "ha-481559" context setting]
	I1006 15:01:53.178466  701984 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-626179/kubeconfig: {Name:mke84a74c9d22714f21826744ac414fa621492d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
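The repair step above re-adds the cluster and context entries the verifier found missing, rewriting the kubeconfig under a file lock. With client-go's clientcmd package the same fix looks roughly like this (a sketch; the CA path is a placeholder and error handling is trimmed):

    package kubeconfig

    import (
        "k8s.io/client-go/tools/clientcmd"
        clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
    )

    // repairKubeconfig re-creates the "ha-481559" cluster and context
    // entries in the kubeconfig at path.
    func repairKubeconfig(path string) error {
        cfg, err := clientcmd.LoadFromFile(path)
        if err != nil {
            return err
        }
        cfg.Clusters["ha-481559"] = &clientcmdapi.Cluster{
            Server:               "https://192.168.49.2:8443",
            CertificateAuthority: "/path/to/.minikube/ca.crt", // placeholder
        }
        cfg.Contexts["ha-481559"] = &clientcmdapi.Context{
            Cluster:  "ha-481559",
            AuthInfo: "ha-481559",
        }
        return clientcmd.WriteToFile(*cfg, path)
    }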
	I1006 15:01:53.179258  701984 kapi.go:59] client config for ha-481559: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/client.crt", KeyFile:"/home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/client.key", CAFile:"/home/jenkins/minikube-integration/21701-626179/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
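The rest.Config dump above is everything client-go needs to talk to the apiserver with the profile's client certificate. Building an equivalent clientset by hand takes only a few lines (a sketch; the cert paths are placeholders for the profile paths shown in the log):

    package client

    import (
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    // newClient builds a clientset for the cluster endpoint using
    // certificate-based auth, as the logged rest.Config does.
    func newClient() (*kubernetes.Clientset, error) {
        cfg := &rest.Config{
            Host: "https://192.168.49.2:8443",
            TLSClientConfig: rest.TLSClientConfig{
                CertFile: "/path/to/profiles/ha-481559/client.crt", // placeholder
                KeyFile:  "/path/to/profiles/ha-481559/client.key", // placeholder
                CAFile:   "/path/to/.minikube/ca.crt",              // placeholder
            },
        }
        return kubernetes.NewForConfig(cfg)
    }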
	I1006 15:01:53.179749  701984 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1006 15:01:53.179781  701984 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1006 15:01:53.179788  701984 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1006 15:01:53.179794  701984 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1006 15:01:53.179789  701984 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1006 15:01:53.179801  701984 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1006 15:01:53.180239  701984 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1006 15:01:53.188398  701984 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1006 15:01:53.188432  701984 kubeadm.go:601] duration metric: took 18.640424ms to restartPrimaryControlPlane
	I1006 15:01:53.188443  701984 kubeadm.go:402] duration metric: took 55.31048ms to StartCluster
	I1006 15:01:53.188464  701984 settings.go:142] acquiring lock: {Name:mk49b10f71f24d1f54d5c453b3b04e717e9a9100 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 15:01:53.188537  701984 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21701-626179/kubeconfig
	I1006 15:01:53.189024  701984 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-626179/kubeconfig: {Name:mke84a74c9d22714f21826744ac414fa621492d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 15:01:53.189291  701984 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1006 15:01:53.189351  701984 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1006 15:01:53.189450  701984 addons.go:69] Setting storage-provisioner=true in profile "ha-481559"
	I1006 15:01:53.189472  701984 addons.go:238] Setting addon storage-provisioner=true in "ha-481559"
	I1006 15:01:53.189480  701984 addons.go:69] Setting default-storageclass=true in profile "ha-481559"
	I1006 15:01:53.189497  701984 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-481559"
	I1006 15:01:53.189510  701984 host.go:66] Checking if "ha-481559" exists ...
	I1006 15:01:53.189548  701984 config.go:182] Loaded profile config "ha-481559": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 15:01:53.189835  701984 cli_runner.go:164] Run: docker container inspect ha-481559 --format={{.State.Status}}
	I1006 15:01:53.190004  701984 cli_runner.go:164] Run: docker container inspect ha-481559 --format={{.State.Status}}
	I1006 15:01:53.192670  701984 out.go:179] * Verifying Kubernetes components...
	I1006 15:01:53.193943  701984 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 15:01:53.209649  701984 kapi.go:59] client config for ha-481559: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/client.crt", KeyFile:"/home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/client.key", CAFile:"/home/jenkins/minikube-integration/21701-626179/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1006 15:01:53.210039  701984 addons.go:238] Setting addon default-storageclass=true in "ha-481559"
	I1006 15:01:53.210089  701984 host.go:66] Checking if "ha-481559" exists ...
	I1006 15:01:53.210542  701984 cli_runner.go:164] Run: docker container inspect ha-481559 --format={{.State.Status}}
	I1006 15:01:53.211200  701984 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1006 15:01:53.212531  701984 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 15:01:53.212549  701984 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1006 15:01:53.212600  701984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 15:01:53.238402  701984 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1006 15:01:53.238430  701984 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1006 15:01:53.238493  701984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 15:01:53.240785  701984 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32893 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa Username:docker}
	I1006 15:01:53.257980  701984 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32893 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa Username:docker}
	I1006 15:01:53.293467  701984 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1006 15:01:53.307364  701984 node_ready.go:35] waiting up to 6m0s for node "ha-481559" to be "Ready" ...
	I1006 15:01:53.350572  701984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 15:01:53.365695  701984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	W1006 15:01:53.407298  701984 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 15:01:53.407342  701984 retry.go:31] will retry after 357.649421ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 15:01:53.420853  701984 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 15:01:53.420888  701984 retry.go:31] will retry after 373.269917ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
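From here on the log is the same two kubectl apply commands being retried with growing delays (the retry.go:31 lines) until the local apiserver starts answering on :8443. In spirit the loop is the following plain-Go sketch; it is not minikube's actual retry.go, whose delays are also jittered:

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // retryWithBackoff retries f until it succeeds or attempts run out,
    // sleeping a little longer after each failure, like the log's
    // "will retry after ..." lines.
    func retryWithBackoff(attempts int, initial time.Duration, f func() error) error {
        delay := initial
        var err error
        for i := 0; i < attempts; i++ {
            if err = f(); err == nil {
                return nil
            }
            fmt.Printf("will retry after %v: %v\n", delay, err)
            time.Sleep(delay)
            delay = delay * 3 / 2 // grow ~1.5x per attempt
        }
        return err
    }

    func main() {
        err := retryWithBackoff(5, 300*time.Millisecond, func() error {
            return errors.New("connection refused") // stand-in for kubectl apply
        })
        fmt.Println("gave up:", err)
    }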
	I1006 15:01:53.765311  701984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 15:01:53.794914  701984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1006 15:01:53.820162  701984 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 15:01:53.820198  701984 retry.go:31] will retry after 560.850722ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 15:01:53.849381  701984 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 15:01:53.849415  701984 retry.go:31] will retry after 534.611771ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 15:01:54.381588  701984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 15:01:54.385156  701984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1006 15:01:54.438225  701984 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 15:01:54.438264  701984 retry.go:31] will retry after 554.670785ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 15:01:54.439112  701984 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 15:01:54.439133  701984 retry.go:31] will retry after 308.986378ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 15:01:54.748751  701984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1006 15:01:54.803407  701984 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 15:01:54.803442  701984 retry.go:31] will retry after 474.547882ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 15:01:54.993194  701984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1006 15:01:55.046254  701984 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 15:01:55.046297  701984 retry.go:31] will retry after 677.664195ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 15:01:55.278726  701984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1006 15:01:55.308628  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:01:55.332936  701984 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 15:01:55.332970  701984 retry.go:31] will retry after 1.775881807s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 15:01:55.724438  701984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1006 15:01:55.776937  701984 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 15:01:55.776969  701984 retry.go:31] will retry after 843.878196ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 15:01:56.621961  701984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1006 15:01:56.675428  701984 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 15:01:56.675463  701984 retry.go:31] will retry after 1.450357982s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 15:01:57.109402  701984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1006 15:01:57.163276  701984 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 15:01:57.163309  701984 retry.go:31] will retry after 2.464163888s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 15:01:57.308897  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	I1006 15:01:58.126261  701984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1006 15:01:58.179363  701984 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 15:01:58.179391  701984 retry.go:31] will retry after 3.126763455s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 15:01:59.628619  701984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1006 15:01:59.681154  701984 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 15:01:59.681190  701984 retry.go:31] will retry after 1.480440704s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 15:01:59.808774  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	I1006 15:02:01.162599  701984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1006 15:02:01.216807  701984 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 15:02:01.216851  701984 retry.go:31] will retry after 3.761635647s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 15:02:01.307128  701984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1006 15:02:01.362791  701984 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 15:02:01.362827  701984 retry.go:31] will retry after 3.177813602s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 15:02:01.808826  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:02:04.308637  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	I1006 15:02:04.540904  701984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1006 15:02:04.594444  701984 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 15:02:04.594481  701984 retry.go:31] will retry after 9.418537731s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 15:02:04.979473  701984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1006 15:02:05.032152  701984 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 15:02:05.032191  701984 retry.go:31] will retry after 8.203513024s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 15:02:06.808141  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:02:08.808703  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:02:11.308146  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	I1006 15:02:13.236126  701984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1006 15:02:13.291139  701984 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 15:02:13.291178  701984 retry.go:31] will retry after 13.734152969s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 15:02:13.308624  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	I1006 15:02:14.013259  701984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1006 15:02:14.066927  701984 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 15:02:14.066963  701984 retry.go:31] will retry after 4.968343953s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 15:02:15.808091  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:02:17.808317  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	I1006 15:02:19.035709  701984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1006 15:02:19.089785  701984 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 15:02:19.089821  701984 retry.go:31] will retry after 18.450329534s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 15:02:19.808717  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:02:22.308279  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:02:24.808005  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:02:26.808376  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	I1006 15:02:27.025657  701984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1006 15:02:27.079430  701984 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 15:02:27.079467  701984 retry.go:31] will retry after 18.308744233s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 15:02:28.808528  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:02:31.308878  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:02:33.808327  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:02:35.808829  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	I1006 15:02:37.540393  701984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1006 15:02:37.593965  701984 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 15:02:37.593995  701984 retry.go:31] will retry after 14.430254714s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 15:02:38.308827  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:02:40.808189  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:02:42.808607  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:02:44.808693  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	I1006 15:02:45.388851  701984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1006 15:02:45.443913  701984 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 15:02:45.443945  701984 retry.go:31] will retry after 30.607683046s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 15:02:47.309012  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:02:49.808000  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:02:51.808101  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	I1006 15:02:52.024419  701984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1006 15:02:52.078859  701984 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 15:02:52.078891  701984 retry.go:31] will retry after 32.375753443s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 15:02:53.808234  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:02:55.808746  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:02:58.308064  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:03:00.308503  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:03:02.808227  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:03:04.808951  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:03:07.308259  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:03:09.308723  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:03:11.808466  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:03:13.808963  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	I1006 15:03:16.052424  701984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1006 15:03:16.106854  701984 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 15:03:16.106899  701984 retry.go:31] will retry after 23.781842061s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 15:03:16.308055  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:03:18.308668  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:03:20.808285  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:03:22.808988  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	I1006 15:03:24.455485  701984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1006 15:03:24.509566  701984 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 15:03:24.509687  701984 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1006 15:03:25.308449  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:03:27.308947  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:03:29.808772  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:03:32.308333  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:03:34.308810  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:03:36.808133  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:03:38.808620  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	I1006 15:03:39.889153  701984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1006 15:03:39.944329  701984 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 15:03:39.944473  701984 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1006 15:03:39.946959  701984 out.go:179] * Enabled addons: 
	I1006 15:03:39.947914  701984 addons.go:514] duration metric: took 1m46.758571336s for enable addons: enabled=[]
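
The repeated apply failures above all have a single cause: kubectl's client-side validation first downloads the OpenAPI schema from the apiserver on localhost:8443, which is refusing connections, so each addon manifest fails before it is ever submitted. The retry.go entries show minikube retrying each apply with jittered backoff delays (32.3s, 23.7s, ...) until the addon step gives up with enabled=[]. As an illustrative sketch only (this is not minikube's actual retry code; the manifest path and backoff parameters are assumptions), the same retry-with-exponential-backoff pattern in Go using k8s.io/apimachinery:

package main

import (
	"fmt"
	"os/exec"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// applyWithBackoff retries `kubectl apply` with exponential backoff,
// mirroring the retry behaviour visible in the log above.
// Manifest path and backoff parameters are illustrative assumptions.
func applyWithBackoff(manifest string) error {
	backoff := wait.Backoff{
		Duration: 5 * time.Second, // first delay
		Factor:   2.0,             // double the delay each attempt
		Jitter:   0.1,             // +/-10% randomization
		Steps:    6,               // give up after six attempts
	}
	return wait.ExponentialBackoff(backoff, func() (bool, error) {
		out, err := exec.Command("kubectl", "apply", "--force", "-f", manifest).CombinedOutput()
		if err != nil {
			fmt.Printf("apply failed, will retry: %v\n%s", err, out)
			return false, nil // not done; retry after the next delay
		}
		return true, nil // done
	})
}

func main() {
	if err := applyWithBackoff("/etc/kubernetes/addons/storage-provisioner.yaml"); err != nil {
		fmt.Println("giving up:", err)
	}
}
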
	W1006 15:03:41.308834  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:03:43.808716  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:03:46.308473  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:03:48.808081  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:03:50.808732  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:03:52.809075  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:03:55.308499  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:03:57.308770  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:03:59.308964  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:04:01.808320  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:04:03.808672  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:04:05.808747  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:04:07.808918  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:04:10.307950  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:04:12.307991  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:04:14.308152  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:04:16.808061  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:04:19.307993  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:04:21.308090  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:04:23.308313  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:04:25.807982  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:04:27.808970  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:04:30.308966  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:04:32.807967  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:04:34.808007  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:04:36.808048  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:04:38.809015  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:04:41.308101  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:04:43.308272  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:04:45.308962  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:04:47.808271  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:04:50.308958  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:04:52.808017  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:04:54.808283  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:04:57.307946  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:04:59.309045  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:05:01.808138  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:05:03.808398  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:05:06.308174  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:05:08.808983  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:05:11.307996  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:05:13.308266  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:05:15.808972  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:05:18.308060  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:05:20.309001  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:05:22.807955  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:05:25.309026  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:05:27.808933  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:05:30.307944  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:05:32.308185  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:05:34.308727  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:05:36.808124  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:05:39.308015  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:05:41.308156  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:05:43.308548  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:05:45.308597  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:05:47.308993  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:05:49.809063  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:05:52.308161  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:05:54.308340  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:05:56.808315  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:05:58.808798  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:06:01.308198  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:06:03.807981  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:06:06.308060  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:06:08.807934  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:06:10.808929  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:06:13.308149  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:06:15.308997  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:06:17.808931  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:06:20.308951  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:06:22.807942  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:06:25.308953  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:06:27.807967  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:06:29.808934  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:06:32.307960  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:06:34.308089  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:06:36.308173  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:06:38.308890  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:06:40.808860  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:06:43.308107  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:06:45.808973  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:06:47.809038  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:06:50.308996  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:06:52.807974  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:06:54.808028  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:06:57.308950  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:06:59.808908  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:07:02.308088  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:07:04.308444  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:07:06.308749  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:07:08.808774  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:07:10.808934  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:07:13.308241  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:07:15.807927  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:07:17.808956  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:07:20.309035  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:07:22.808059  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:07:25.308026  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:07:27.809087  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:07:30.307981  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:07:32.308029  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:07:34.308474  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:07:36.308569  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:07:38.808502  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:07:41.308145  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:07:43.807946  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:07:45.808852  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:07:48.308766  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:07:50.808719  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:07:52.809004  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	I1006 15:07:53.308060  701984 node_ready.go:38] duration metric: took 6m0.000216007s for node "ha-481559" to be "Ready" ...
	I1006 15:07:53.311054  701984 out.go:203] 
	W1006 15:07:53.312196  701984 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1006 15:07:53.312219  701984 out.go:285] * 
	W1006 15:07:53.313838  701984 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1006 15:07:53.315023  701984 out.go:203] 
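
The log above ends with node_ready.go polling the node's Ready condition every ~2 to 2.5 seconds for the full six-minute budget (node_ready.go:38) and minikube exiting with GUEST_START once the deadline passes. A minimal client-go sketch of that Ready check, illustrative only (the kubeconfig path, node name, interval, and timeout are taken from the log; everything else is an assumption):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the named node has condition Ready=True.
func nodeReady(cs *kubernetes.Clientset, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err // e.g. "connect: connection refused" while the apiserver is down
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	// Path as seen in the log; minikube keeps a kubeconfig on the node itself.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(6 * time.Minute) // same budget as the log above
	for time.Now().Before(deadline) {
		if ok, err := nodeReady(cs, "ha-481559"); err == nil && ok {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(2500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for node Ready")
}
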
	
	
	==> CRI-O <==
	Oct 06 15:07:45 ha-481559 crio[519]: time="2025-10-06T15:07:45.633715114Z" level=info msg="createCtr: deleting container 1c23465231411a2a5d53aafce9efcc8a9601423dbf737aed2fbc35c0cfd72666 from storage" id=15b183ac-4f30-496c-b4f5-2cf301336d6a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 15:07:45 ha-481559 crio[519]: time="2025-10-06T15:07:45.635418322Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-ha-481559_kube-system_cc93cb8d89afaa943672c70952b45174_0" id=288a08ca-6816-4291-a0bd-1ce84792e8bc name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 15:07:45 ha-481559 crio[519]: time="2025-10-06T15:07:45.6357237Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-ha-481559_kube-system_5f3181798721fe8691d871f051785efc_0" id=15b183ac-4f30-496c-b4f5-2cf301336d6a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 15:07:46 ha-481559 crio[519]: time="2025-10-06T15:07:46.605899234Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=7dcaf3bc-ee04-4da2-9a70-40eb6f735cd0 name=/runtime.v1.ImageService/ImageStatus
	Oct 06 15:07:46 ha-481559 crio[519]: time="2025-10-06T15:07:46.606919742Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=b1bb9b90-74e5-4120-8197-35710054bee3 name=/runtime.v1.ImageService/ImageStatus
	Oct 06 15:07:46 ha-481559 crio[519]: time="2025-10-06T15:07:46.607858943Z" level=info msg="Creating container: kube-system/kube-apiserver-ha-481559/kube-apiserver" id=94453879-139c-429c-a5b1-5ee37a0899b6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 15:07:46 ha-481559 crio[519]: time="2025-10-06T15:07:46.608081127Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 15:07:46 ha-481559 crio[519]: time="2025-10-06T15:07:46.612410997Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 15:07:46 ha-481559 crio[519]: time="2025-10-06T15:07:46.61284047Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 15:07:46 ha-481559 crio[519]: time="2025-10-06T15:07:46.627946174Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=94453879-139c-429c-a5b1-5ee37a0899b6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 15:07:46 ha-481559 crio[519]: time="2025-10-06T15:07:46.629186386Z" level=info msg="createCtr: deleting container ID 7cccc243360d2822c57ef267495d4ba2f52ac7d1a172de4f7bf86c2782752b95 from idIndex" id=94453879-139c-429c-a5b1-5ee37a0899b6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 15:07:46 ha-481559 crio[519]: time="2025-10-06T15:07:46.629237885Z" level=info msg="createCtr: removing container 7cccc243360d2822c57ef267495d4ba2f52ac7d1a172de4f7bf86c2782752b95" id=94453879-139c-429c-a5b1-5ee37a0899b6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 15:07:46 ha-481559 crio[519]: time="2025-10-06T15:07:46.629267049Z" level=info msg="createCtr: deleting container 7cccc243360d2822c57ef267495d4ba2f52ac7d1a172de4f7bf86c2782752b95 from storage" id=94453879-139c-429c-a5b1-5ee37a0899b6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 15:07:46 ha-481559 crio[519]: time="2025-10-06T15:07:46.631063814Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-ha-481559_kube-system_b4e1cca8a09d3789a7e0862428dfe0db_0" id=94453879-139c-429c-a5b1-5ee37a0899b6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 15:07:53 ha-481559 crio[519]: time="2025-10-06T15:07:53.6056339Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=9d4ff92e-10b7-4cbd-a66f-12aec986be76 name=/runtime.v1.ImageService/ImageStatus
	Oct 06 15:07:53 ha-481559 crio[519]: time="2025-10-06T15:07:53.606711328Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=5d809907-2803-4774-bb2b-994147e1fe9e name=/runtime.v1.ImageService/ImageStatus
	Oct 06 15:07:53 ha-481559 crio[519]: time="2025-10-06T15:07:53.607814333Z" level=info msg="Creating container: kube-system/etcd-ha-481559/etcd" id=a0a68d32-290d-475f-98e2-039b9e340155 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 15:07:53 ha-481559 crio[519]: time="2025-10-06T15:07:53.608120994Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 15:07:53 ha-481559 crio[519]: time="2025-10-06T15:07:53.6122724Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 15:07:53 ha-481559 crio[519]: time="2025-10-06T15:07:53.612720273Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 15:07:53 ha-481559 crio[519]: time="2025-10-06T15:07:53.635848688Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=a0a68d32-290d-475f-98e2-039b9e340155 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 15:07:53 ha-481559 crio[519]: time="2025-10-06T15:07:53.637556379Z" level=info msg="createCtr: deleting container ID 5010fd13ff74bfb6cd5c840a91b2b7c210c7ea5032b47c702543d7ccf65b7d27 from idIndex" id=a0a68d32-290d-475f-98e2-039b9e340155 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 15:07:53 ha-481559 crio[519]: time="2025-10-06T15:07:53.637596029Z" level=info msg="createCtr: removing container 5010fd13ff74bfb6cd5c840a91b2b7c210c7ea5032b47c702543d7ccf65b7d27" id=a0a68d32-290d-475f-98e2-039b9e340155 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 15:07:53 ha-481559 crio[519]: time="2025-10-06T15:07:53.637629866Z" level=info msg="createCtr: deleting container 5010fd13ff74bfb6cd5c840a91b2b7c210c7ea5032b47c702543d7ccf65b7d27 from storage" id=a0a68d32-290d-475f-98e2-039b9e340155 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 15:07:53 ha-481559 crio[519]: time="2025-10-06T15:07:53.643172574Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-ha-481559_kube-system_520c6060936b1c2aac479c99ed6c0355_0" id=a0a68d32-290d-475f-98e2-039b9e340155 name=/runtime.v1.RuntimeService/CreateContainer
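
Every CreateContainer attempt in the CRI-O log fails the same way: "cannot open sd-bus: No such file or directory". The runtime is trying to reach systemd over its bus socket, which is evidently absent inside this node container, so no control-plane container is ever created. A small diagnostic sketch, assuming the conventional socket locations (none of these paths appear in the report itself):

package main

import (
	"fmt"
	"os"
)

// Checks for the sockets an sd-bus connection typically needs.
// The paths below are the conventional locations, listed as assumptions.
func main() {
	paths := []string{
		"/run/systemd/private",        // systemd's private bus
		"/run/dbus/system_bus_socket", // system D-Bus socket
		"/run/systemd/system",         // marker that systemd is running as init
	}
	for _, p := range paths {
		if _, err := os.Stat(p); err != nil {
			fmt.Printf("missing: %-30s (%v)\n", p, err)
		} else {
			fmt.Printf("present: %s\n", p)
		}
	}
}
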
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 15:07:54.242766    2022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 15:07:54.243342    2022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 15:07:54.244946    2022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 15:07:54.245444    2022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 15:07:54.248146    2022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	
	
	==> kernel <==
	 15:07:54 up  5:50,  0 user,  load average: 0.08, 0.04, 0.09
	Linux ha-481559 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 06 15:07:45 ha-481559 kubelet[675]:         container kube-controller-manager start failed in pod kube-controller-manager-ha-481559_kube-system(5f3181798721fe8691d871f051785efc): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 06 15:07:45 ha-481559 kubelet[675]:  > logger="UnhandledError"
	Oct 06 15:07:45 ha-481559 kubelet[675]: E1006 15:07:45.637136     675 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-ha-481559" podUID="5f3181798721fe8691d871f051785efc"
	Oct 06 15:07:46 ha-481559 kubelet[675]: E1006 15:07:46.605434     675 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-481559\" not found" node="ha-481559"
	Oct 06 15:07:46 ha-481559 kubelet[675]: E1006 15:07:46.631305     675 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 06 15:07:46 ha-481559 kubelet[675]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 06 15:07:46 ha-481559 kubelet[675]:  > podSandboxID="f7eda3d46c32414abdc80e3039e259073917785f77504bcad4aebf60db4c3330"
	Oct 06 15:07:46 ha-481559 kubelet[675]: E1006 15:07:46.631392     675 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 06 15:07:46 ha-481559 kubelet[675]:         container kube-apiserver start failed in pod kube-apiserver-ha-481559_kube-system(b4e1cca8a09d3789a7e0862428dfe0db): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 06 15:07:46 ha-481559 kubelet[675]:  > logger="UnhandledError"
	Oct 06 15:07:46 ha-481559 kubelet[675]: E1006 15:07:46.631421     675 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-ha-481559" podUID="b4e1cca8a09d3789a7e0862428dfe0db"
	Oct 06 15:07:48 ha-481559 kubelet[675]: E1006 15:07:48.196145     675 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://192.168.49.2:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
	Oct 06 15:07:48 ha-481559 kubelet[675]: E1006 15:07:48.248269     675 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-481559?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 06 15:07:48 ha-481559 kubelet[675]: I1006 15:07:48.419061     675 kubelet_node_status.go:75] "Attempting to register node" node="ha-481559"
	Oct 06 15:07:48 ha-481559 kubelet[675]: E1006 15:07:48.419541     675 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-481559"
	Oct 06 15:07:51 ha-481559 kubelet[675]: E1006 15:07:51.129592     675 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8443: connect: connection refused" event="&Event{ObjectMeta:{ha-481559.186bef0b9dfc36de  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-481559,UID:ha-481559,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ha-481559 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ha-481559,},FirstTimestamp:2025-10-06 15:01:52.592541406 +0000 UTC m=+0.076229606,LastTimestamp:2025-10-06 15:01:52.592541406 +0000 UTC m=+0.076229606,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-481559,}"
	Oct 06 15:07:52 ha-481559 kubelet[675]: E1006 15:07:52.618976     675 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-481559\" not found"
	Oct 06 15:07:53 ha-481559 kubelet[675]: E1006 15:07:53.605090     675 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-481559\" not found" node="ha-481559"
	Oct 06 15:07:53 ha-481559 kubelet[675]: E1006 15:07:53.643581     675 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 06 15:07:53 ha-481559 kubelet[675]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 06 15:07:53 ha-481559 kubelet[675]:  > podSandboxID="2509df0fbb37ea26e7c4176db5318bb5b7bb232dde96912d6badc3737828a2f0"
	Oct 06 15:07:53 ha-481559 kubelet[675]: E1006 15:07:53.643723     675 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 06 15:07:53 ha-481559 kubelet[675]:         container etcd start failed in pod etcd-ha-481559_kube-system(520c6060936b1c2aac479c99ed6c0355): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 06 15:07:53 ha-481559 kubelet[675]:  > logger="UnhandledError"
	Oct 06 15:07:53 ha-481559 kubelet[675]: E1006 15:07:53.643767     675 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-ha-481559" podUID="520c6060936b1c2aac479c99ed6c0355"
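
The kubelet log closes the loop: etcd, kube-apiserver, and kube-controller-manager all hit the same CreateContainerError, which is why the container status table above is empty and every request to port 8443 is refused. A throwaway triage sketch (illustrative only; it reads a saved kubelet journal from stdin) that tallies the error per pod:

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// Tallies CreateContainerError occurrences per pod from a kubelet log
// piped on stdin, e.g.:
//   minikube ssh -- sudo journalctl -u kubelet | go run tally.go
func main() {
	re := regexp.MustCompile(`CreateContainerError.*pod="([^"]+)"`)
	counts := map[string]int{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be long
	for sc.Scan() {
		if m := re.FindStringSubmatch(sc.Text()); m != nil {
			counts[m[1]]++
		}
	}
	for pod, n := range counts {
		fmt.Printf("%4d  %s\n", n, pod)
	}
}
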
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-481559 -n ha-481559
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-481559 -n ha-481559: exit status 2 (296.36766ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "ha-481559" apiserver is not running, skipping kubectl commands (state="Stopped")
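
The status probe here renders a Go text/template ({{.APIServer}}) over minikube's status struct; "Stopped" plus exit status 2 is what lets helpers_test skip the kubectl-based post-mortem. A sketch of the template mechanism, with an illustrative stand-in struct (the field names are assumptions matching the --format flag used above):

package main

import (
	"os"
	"text/template"
)

// Illustrative stand-in for minikube's status struct; the field name
// matches the template used by the test (--format={{.APIServer}}).
type Status struct {
	Host, Kubelet, APIServer string
}

func main() {
	tmpl := template.Must(template.New("status").Parse("{{.APIServer}}"))
	// Prints "Stopped", the same value the test observed.
	_ = tmpl.Execute(os.Stdout, Status{Host: "Running", Kubelet: "Running", APIServer: "Stopped"})
}
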
--- FAIL: TestMultiControlPlane/serial/RestartCluster (368.56s)

x
+
TestMultiControlPlane/serial/DegradedAfterClusterRestart (1.55s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:415: expected profile "ha-481559" in json of 'profile list' to have "Degraded" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-481559\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-481559\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d\",\"Memory\":3072,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.34.1\",\"ClusterName\":\"ha-481559\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.49.2\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"MountString\":\"\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"DisableCoreDNSLog\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterClusterRestart]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterClusterRestart]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-481559
helpers_test.go:243: (dbg) docker inspect ha-481559:

-- stdout --
	[
	    {
	        "Id": "8b017d29b6b188c11217460aa328e959f3cceef4aaac68c0efbe9c3f356f27b0",
	        "Created": "2025-10-06T14:44:39.623616791Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 702186,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-06T15:01:46.338559643Z",
	            "FinishedAt": "2025-10-06T15:01:45.038433314Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/8b017d29b6b188c11217460aa328e959f3cceef4aaac68c0efbe9c3f356f27b0/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8b017d29b6b188c11217460aa328e959f3cceef4aaac68c0efbe9c3f356f27b0/hostname",
	        "HostsPath": "/var/lib/docker/containers/8b017d29b6b188c11217460aa328e959f3cceef4aaac68c0efbe9c3f356f27b0/hosts",
	        "LogPath": "/var/lib/docker/containers/8b017d29b6b188c11217460aa328e959f3cceef4aaac68c0efbe9c3f356f27b0/8b017d29b6b188c11217460aa328e959f3cceef4aaac68c0efbe9c3f356f27b0-json.log",
	        "Name": "/ha-481559",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-481559:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-481559",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "8b017d29b6b188c11217460aa328e959f3cceef4aaac68c0efbe9c3f356f27b0",
	                "LowerDir": "/var/lib/docker/overlay2/764c4d13032cea1c981a1513641378a4e309e5bc01b5356c6fecba1c3e0e2311-init/diff:/var/lib/docker/overlay2/498c39ad2e273bbda04a4b230222b9767ea2da097b1fe98436168d26143cd080/diff",
	                "MergedDir": "/var/lib/docker/overlay2/764c4d13032cea1c981a1513641378a4e309e5bc01b5356c6fecba1c3e0e2311/merged",
	                "UpperDir": "/var/lib/docker/overlay2/764c4d13032cea1c981a1513641378a4e309e5bc01b5356c6fecba1c3e0e2311/diff",
	                "WorkDir": "/var/lib/docker/overlay2/764c4d13032cea1c981a1513641378a4e309e5bc01b5356c6fecba1c3e0e2311/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-481559",
	                "Source": "/var/lib/docker/volumes/ha-481559/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-481559",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-481559",
	                "name.minikube.sigs.k8s.io": "ha-481559",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "96ad0a0c00ce1e2fd1255251fdbe6e26beae966a5054a86bbea20c89f537c09f",
	            "SandboxKey": "/var/run/docker/netns/96ad0a0c00ce",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32893"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32894"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32897"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32895"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32896"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-481559": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "da:92:da:5b:3d:78",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "be549c6a1ae4457d4629d9a7f86cde88021333ee0af8bb7a740b008115c43dde",
	                    "EndpointID": "c5dcb77b8e9feae93629ab92a205600e06ab65076f80e1ea27e6fbc473fcf4ef",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-481559",
	                        "8b017d29b6b1"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
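The inspect output above shows each exposed container port (22/tcp, 2376/tcp, 8443/tcp, ...) published on an ephemeral 127.0.0.1 host port. As a minimal Go sketch (illustrative only, not minikube's own code; the container name ha-481559 and the port 32893 come from this run, the file name is hypothetical), the SSH mapping can be read back out of `docker container inspect` JSON like so:

	// portprobe.go (hypothetical): print the host endpoint bound to 22/tcp.
	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"os/exec"
	)

	// inspectEntry models just the fragment of `docker container inspect`
	// output used here: NetworkSettings.Ports["22/tcp"][n].HostIp/HostPort.
	type inspectEntry struct {
		NetworkSettings struct {
			Ports map[string][]struct {
				HostIp   string
				HostPort string
			}
		}
	}

	func main() {
		out, err := exec.Command("docker", "container", "inspect", "ha-481559").Output()
		if err != nil {
			log.Fatal(err)
		}
		var entries []inspectEntry // inspect always returns a JSON array
		if err := json.Unmarshal(out, &entries); err != nil {
			log.Fatal(err)
		}
		if len(entries) == 0 {
			log.Fatal("no such container")
		}
		for _, b := range entries[0].NetworkSettings.Ports["22/tcp"] {
			fmt.Printf("ssh endpoint: %s:%s\n", b.HostIp, b.HostPort) // 127.0.0.1:32893 in this run
		}
	}

minikube itself does the equivalent inline with a Go template, visible later in this log as --format "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'".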
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-481559 -n ha-481559
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-481559 -n ha-481559: exit status 2 (290.789774ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestMultiControlPlane/serial/DegradedAfterClusterRestart FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterClusterRestart]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-481559 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/DegradedAfterClusterRestart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                             ARGS                                             │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 14:52 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 14:52 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 14:53 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 14:53 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 14:53 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 14:53 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 14:53 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 14:53 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 14:54 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 14:54 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                        │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 14:54 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- exec  -- nslookup kubernetes.io                                         │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 14:54 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- exec  -- nslookup kubernetes.default                                    │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 14:54 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local                  │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 14:54 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                        │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 14:54 UTC │                     │
	│ node    │ ha-481559 node add --alsologtostderr -v 5                                                    │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 14:54 UTC │                     │
	│ node    │ ha-481559 node stop m02 --alsologtostderr -v 5                                               │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 14:54 UTC │                     │
	│ node    │ ha-481559 node start m02 --alsologtostderr -v 5                                              │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 14:54 UTC │                     │
	│ node    │ ha-481559 node list --alsologtostderr -v 5                                                   │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 14:55 UTC │                     │
	│ stop    │ ha-481559 stop --alsologtostderr -v 5                                                        │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 14:55 UTC │ 06 Oct 25 14:55 UTC │
	│ start   │ ha-481559 start --wait true --alsologtostderr -v 5                                           │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 14:55 UTC │                     │
	│ node    │ ha-481559 node list --alsologtostderr -v 5                                                   │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 15:01 UTC │                     │
	│ node    │ ha-481559 node delete m03 --alsologtostderr -v 5                                             │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 15:01 UTC │                     │
	│ stop    │ ha-481559 stop --alsologtostderr -v 5                                                        │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 15:01 UTC │ 06 Oct 25 15:01 UTC │
	│ start   │ ha-481559 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 15:01 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/06 15:01:46
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1006 15:01:46.116187  701984 out.go:360] Setting OutFile to fd 1 ...
	I1006 15:01:46.116327  701984 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 15:01:46.116336  701984 out.go:374] Setting ErrFile to fd 2...
	I1006 15:01:46.116340  701984 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 15:01:46.116564  701984 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-626179/.minikube/bin
	I1006 15:01:46.116989  701984 out.go:368] Setting JSON to false
	I1006 15:01:46.117973  701984 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":20642,"bootTime":1759742264,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1006 15:01:46.118071  701984 start.go:140] virtualization: kvm guest
	I1006 15:01:46.119930  701984 out.go:179] * [ha-481559] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1006 15:01:46.121071  701984 out.go:179]   - MINIKUBE_LOCATION=21701
	I1006 15:01:46.121071  701984 notify.go:220] Checking for updates...
	I1006 15:01:46.123063  701984 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1006 15:01:46.124433  701984 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21701-626179/kubeconfig
	I1006 15:01:46.125406  701984 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21701-626179/.minikube
	I1006 15:01:46.126304  701984 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1006 15:01:46.127330  701984 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1006 15:01:46.128989  701984 config.go:182] Loaded profile config "ha-481559": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 15:01:46.129680  701984 driver.go:421] Setting default libvirt URI to qemu:///system
	I1006 15:01:46.153833  701984 docker.go:123] docker version: linux-28.5.0:Docker Engine - Community
	I1006 15:01:46.153923  701984 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 15:01:46.210040  701984 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-06 15:01:46.200236285 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1006 15:01:46.210147  701984 docker.go:318] overlay module found
	I1006 15:01:46.211692  701984 out.go:179] * Using the docker driver based on existing profile
	I1006 15:01:46.212596  701984 start.go:304] selected driver: docker
	I1006 15:01:46.212612  701984 start.go:924] validating driver "docker" against &{Name:ha-481559 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-481559 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 15:01:46.212693  701984 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1006 15:01:46.212776  701984 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 15:01:46.269605  701984 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-06 15:01:46.258876471 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1006 15:01:46.270302  701984 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1006 15:01:46.270329  701984 cni.go:84] Creating CNI manager for ""
	I1006 15:01:46.270373  701984 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1006 15:01:46.270419  701984 start.go:348] cluster config:
	{Name:ha-481559 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-481559 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 15:01:46.272125  701984 out.go:179] * Starting "ha-481559" primary control-plane node in "ha-481559" cluster
	I1006 15:01:46.273048  701984 cache.go:123] Beginning downloading kic base image for docker with crio
	I1006 15:01:46.274095  701984 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1006 15:01:46.274969  701984 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1006 15:01:46.275001  701984 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21701-626179/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1006 15:01:46.275010  701984 cache.go:58] Caching tarball of preloaded images
	I1006 15:01:46.275079  701984 preload.go:233] Found /home/jenkins/minikube-integration/21701-626179/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1006 15:01:46.275089  701984 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1006 15:01:46.275081  701984 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1006 15:01:46.275176  701984 profile.go:143] Saving config to /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/config.json ...
	I1006 15:01:46.295225  701984 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1006 15:01:46.295246  701984 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1006 15:01:46.295266  701984 cache.go:232] Successfully downloaded all kic artifacts
	I1006 15:01:46.295293  701984 start.go:360] acquireMachinesLock for ha-481559: {Name:mk240cd185ab39e9e4d3fa7c476aea5736cb5b11 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1006 15:01:46.295349  701984 start.go:364] duration metric: took 37.555µs to acquireMachinesLock for "ha-481559"
	I1006 15:01:46.295367  701984 start.go:96] Skipping create...Using existing machine configuration
	I1006 15:01:46.295375  701984 fix.go:54] fixHost starting: 
	I1006 15:01:46.295587  701984 cli_runner.go:164] Run: docker container inspect ha-481559 --format={{.State.Status}}
	I1006 15:01:46.312275  701984 fix.go:112] recreateIfNeeded on ha-481559: state=Stopped err=<nil>
	W1006 15:01:46.312302  701984 fix.go:138] unexpected machine state, will restart: <nil>
	I1006 15:01:46.314002  701984 out.go:252] * Restarting existing docker container for "ha-481559" ...
	I1006 15:01:46.314062  701984 cli_runner.go:164] Run: docker start ha-481559
	I1006 15:01:46.546450  701984 cli_runner.go:164] Run: docker container inspect ha-481559 --format={{.State.Status}}
	I1006 15:01:46.564424  701984 kic.go:430] container "ha-481559" state is running.
	I1006 15:01:46.564772  701984 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-481559
	I1006 15:01:46.582786  701984 profile.go:143] Saving config to /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/config.json ...
	I1006 15:01:46.582997  701984 machine.go:93] provisionDockerMachine start ...
	I1006 15:01:46.583078  701984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 15:01:46.601452  701984 main.go:141] libmachine: Using SSH client type: native
	I1006 15:01:46.601724  701984 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32893 <nil> <nil>}
	I1006 15:01:46.601739  701984 main.go:141] libmachine: About to run SSH command:
	hostname
	I1006 15:01:46.602337  701984 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:35090->127.0.0.1:32893: read: connection reset by peer
	I1006 15:01:49.745932  701984 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-481559
	
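The first dial at 15:01:46 hits "connection reset by peer" because sshd inside the just-restarted container is still coming up; the same command succeeds about three seconds later. A hypothetical Go sketch of that retry pattern (names, attempt counts, and timings are illustrative, not minikube's implementation):

	// dialretry.go (hypothetical): retry a TCP dial until the service is up.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func dialWithRetry(addr string, attempts int, backoff time.Duration) (net.Conn, error) {
		var err error
		for i := 0; i < attempts; i++ {
			var c net.Conn
			c, err = net.DialTimeout("tcp", addr, 3*time.Second)
			if err == nil {
				return c, nil
			}
			time.Sleep(backoff) // covers the ~3s gap seen between 15:01:46 and 15:01:49 above
		}
		return nil, fmt.Errorf("dial %s after %d attempts: %w", addr, attempts, err)
	}

	func main() {
		conn, err := dialWithRetry("127.0.0.1:32893", 5, time.Second) // the 22/tcp mapping from this run
		if err != nil {
			fmt.Println("giving up:", err)
			return
		}
		defer conn.Close()
		fmt.Println("connected to", conn.RemoteAddr())
	}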
	I1006 15:01:49.745960  701984 ubuntu.go:182] provisioning hostname "ha-481559"
	I1006 15:01:49.746042  701984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 15:01:49.763495  701984 main.go:141] libmachine: Using SSH client type: native
	I1006 15:01:49.763769  701984 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32893 <nil> <nil>}
	I1006 15:01:49.763784  701984 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-481559 && echo "ha-481559" | sudo tee /etc/hostname
	I1006 15:01:49.916644  701984 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-481559
	
	I1006 15:01:49.916725  701984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 15:01:49.934847  701984 main.go:141] libmachine: Using SSH client type: native
	I1006 15:01:49.935071  701984 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32893 <nil> <nil>}
	I1006 15:01:49.935089  701984 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-481559' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-481559/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-481559' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1006 15:01:50.079011  701984 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1006 15:01:50.079055  701984 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21701-626179/.minikube CaCertPath:/home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21701-626179/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21701-626179/.minikube}
	I1006 15:01:50.079077  701984 ubuntu.go:190] setting up certificates
	I1006 15:01:50.079088  701984 provision.go:84] configureAuth start
	I1006 15:01:50.079141  701984 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-481559
	I1006 15:01:50.096776  701984 provision.go:143] copyHostCerts
	I1006 15:01:50.096843  701984 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21701-626179/.minikube/key.pem
	I1006 15:01:50.096887  701984 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-626179/.minikube/key.pem, removing ...
	I1006 15:01:50.096924  701984 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-626179/.minikube/key.pem
	I1006 15:01:50.097001  701984 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21701-626179/.minikube/key.pem (1679 bytes)
	I1006 15:01:50.097123  701984 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21701-626179/.minikube/ca.pem
	I1006 15:01:50.097151  701984 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-626179/.minikube/ca.pem, removing ...
	I1006 15:01:50.097159  701984 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-626179/.minikube/ca.pem
	I1006 15:01:50.097230  701984 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21701-626179/.minikube/ca.pem (1082 bytes)
	I1006 15:01:50.097381  701984 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21701-626179/.minikube/cert.pem
	I1006 15:01:50.097413  701984 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-626179/.minikube/cert.pem, removing ...
	I1006 15:01:50.097420  701984 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-626179/.minikube/cert.pem
	I1006 15:01:50.097468  701984 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21701-626179/.minikube/cert.pem (1123 bytes)
	I1006 15:01:50.097549  701984 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21701-626179/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca-key.pem org=jenkins.ha-481559 san=[127.0.0.1 192.168.49.2 ha-481559 localhost minikube]
	I1006 15:01:50.447800  701984 provision.go:177] copyRemoteCerts
	I1006 15:01:50.447874  701984 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1006 15:01:50.447927  701984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 15:01:50.465959  701984 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32893 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa Username:docker}
	I1006 15:01:50.568789  701984 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1006 15:01:50.568870  701984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1006 15:01:50.586702  701984 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1006 15:01:50.586774  701984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1006 15:01:50.604720  701984 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1006 15:01:50.604808  701984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1006 15:01:50.622688  701984 provision.go:87] duration metric: took 543.582589ms to configureAuth
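configureAuth above regenerates the Docker-machine server certificate with san=[127.0.0.1 192.168.49.2 ha-481559 localhost minikube] (the provision.go:117 line). A minimal Go sketch of issuing a certificate with those SANs follows; it self-signs for brevity, whereas minikube signs with the ca.pem/ca-key.pem pair listed above, and the file name is hypothetical:

	// certsketch.go (hypothetical): a server certificate carrying the SANs
	// from the log above. Self-signed here; minikube uses its own CA.
	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"log"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			log.Fatal(err)
		}
		tmpl := x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.ha-481559"}}, // org= from the log
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration:26280h0m0s in the profile
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// san=[127.0.0.1 192.168.49.2 ha-481559 localhost minikube]
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
			DNSNames:    []string{"ha-481559", "localhost", "minikube"},
		}
		der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
		if err != nil {
			log.Fatal(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}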
	I1006 15:01:50.622726  701984 ubuntu.go:206] setting minikube options for container-runtime
	I1006 15:01:50.622909  701984 config.go:182] Loaded profile config "ha-481559": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 15:01:50.623013  701984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 15:01:50.640864  701984 main.go:141] libmachine: Using SSH client type: native
	I1006 15:01:50.641165  701984 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32893 <nil> <nil>}
	I1006 15:01:50.641193  701984 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1006 15:01:50.900815  701984 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1006 15:01:50.900843  701984 machine.go:96] duration metric: took 4.317828783s to provisionDockerMachine
	I1006 15:01:50.900853  701984 start.go:293] postStartSetup for "ha-481559" (driver="docker")
	I1006 15:01:50.900863  701984 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1006 15:01:50.900923  701984 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1006 15:01:50.900961  701984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 15:01:50.918547  701984 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32893 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa Username:docker}
	I1006 15:01:51.021081  701984 ssh_runner.go:195] Run: cat /etc/os-release
	I1006 15:01:51.024764  701984 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1006 15:01:51.024788  701984 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1006 15:01:51.024798  701984 filesync.go:126] Scanning /home/jenkins/minikube-integration/21701-626179/.minikube/addons for local assets ...
	I1006 15:01:51.024843  701984 filesync.go:126] Scanning /home/jenkins/minikube-integration/21701-626179/.minikube/files for local assets ...
	I1006 15:01:51.024912  701984 filesync.go:149] local asset: /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem -> 6297192.pem in /etc/ssl/certs
	I1006 15:01:51.024927  701984 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem -> /etc/ssl/certs/6297192.pem
	I1006 15:01:51.025019  701984 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1006 15:01:51.032826  701984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem --> /etc/ssl/certs/6297192.pem (1708 bytes)
	I1006 15:01:51.050602  701984 start.go:296] duration metric: took 149.73063ms for postStartSetup
	I1006 15:01:51.050696  701984 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1006 15:01:51.050748  701984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 15:01:51.068484  701984 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32893 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa Username:docker}
	I1006 15:01:51.167707  701984 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1006 15:01:51.172531  701984 fix.go:56] duration metric: took 4.877147401s for fixHost
	I1006 15:01:51.172561  701984 start.go:83] releasing machines lock for "ha-481559", held for 4.877200795s
	I1006 15:01:51.172636  701984 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-481559
	I1006 15:01:51.190941  701984 ssh_runner.go:195] Run: cat /version.json
	I1006 15:01:51.191006  701984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 15:01:51.191054  701984 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1006 15:01:51.191134  701984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 15:01:51.209128  701984 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32893 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa Username:docker}
	I1006 15:01:51.209584  701984 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32893 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa Username:docker}
	I1006 15:01:51.362495  701984 ssh_runner.go:195] Run: systemctl --version
	I1006 15:01:51.369363  701984 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1006 15:01:51.404999  701984 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1006 15:01:51.409958  701984 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1006 15:01:51.410028  701984 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1006 15:01:51.418138  701984 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1006 15:01:51.418168  701984 start.go:495] detecting cgroup driver to use...
	I1006 15:01:51.418201  701984 detect.go:190] detected "systemd" cgroup driver on host os
	I1006 15:01:51.418264  701984 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1006 15:01:51.432500  701984 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1006 15:01:51.444740  701984 docker.go:218] disabling cri-docker service (if available) ...
	I1006 15:01:51.444799  701984 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1006 15:01:51.459568  701984 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1006 15:01:51.472638  701984 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1006 15:01:51.548093  701984 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1006 15:01:51.629502  701984 docker.go:234] disabling docker service ...
	I1006 15:01:51.629574  701984 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1006 15:01:51.643687  701984 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1006 15:01:51.656528  701984 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1006 15:01:51.734011  701984 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1006 15:01:51.812779  701984 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1006 15:01:51.825167  701984 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1006 15:01:51.839186  701984 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1006 15:01:51.839274  701984 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 15:01:51.848529  701984 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1006 15:01:51.848608  701984 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 15:01:51.857415  701984 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 15:01:51.866115  701984 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 15:01:51.874826  701984 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1006 15:01:51.882836  701984 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 15:01:51.891797  701984 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 15:01:51.900171  701984 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 15:01:51.908782  701984 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1006 15:01:51.916072  701984 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1006 15:01:51.923289  701984 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 15:01:51.999114  701984 ssh_runner.go:195] Run: sudo systemctl restart crio
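Taken together, the sed edits above amount to a /etc/crio/crio.conf.d/02-crio.conf drop-in along these lines (a reconstruction from the commands, not a capture from the node; the section headers are assumed from the stock CRI-O layout):

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"

	[crio.runtime]
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]

After the daemon-reload and `systemctl restart crio`, the restart is confirmed by the socket-path and crictl version checks that follow.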
	I1006 15:01:52.103785  701984 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1006 15:01:52.103847  701984 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1006 15:01:52.107845  701984 start.go:563] Will wait 60s for crictl version
	I1006 15:01:52.107895  701984 ssh_runner.go:195] Run: which crictl
	I1006 15:01:52.111706  701984 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1006 15:01:52.137020  701984 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1006 15:01:52.137126  701984 ssh_runner.go:195] Run: crio --version
	I1006 15:01:52.166358  701984 ssh_runner.go:195] Run: crio --version
	I1006 15:01:52.197148  701984 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1006 15:01:52.198353  701984 cli_runner.go:164] Run: docker network inspect ha-481559 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1006 15:01:52.216087  701984 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1006 15:01:52.220573  701984 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1006 15:01:52.231278  701984 kubeadm.go:883] updating cluster {Name:ha-481559 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-481559 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1006 15:01:52.231400  701984 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1006 15:01:52.231450  701984 ssh_runner.go:195] Run: sudo crictl images --output json
	I1006 15:01:52.264781  701984 crio.go:514] all images are preloaded for cri-o runtime.
	I1006 15:01:52.264801  701984 crio.go:433] Images already preloaded, skipping extraction
	I1006 15:01:52.264844  701984 ssh_runner.go:195] Run: sudo crictl images --output json
	I1006 15:01:52.291584  701984 crio.go:514] all images are preloaded for cri-o runtime.
	I1006 15:01:52.291607  701984 cache_images.go:85] Images are preloaded, skipping loading
	I1006 15:01:52.291614  701984 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1006 15:01:52.291708  701984 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-481559 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-481559 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1006 15:01:52.291770  701984 ssh_runner.go:195] Run: crio config
	I1006 15:01:52.338567  701984 cni.go:84] Creating CNI manager for ""
	I1006 15:01:52.338589  701984 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1006 15:01:52.338610  701984 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1006 15:01:52.338632  701984 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-481559 NodeName:ha-481559 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1006 15:01:52.338744  701984 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-481559"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1006 15:01:52.338801  701984 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1006 15:01:52.347483  701984 binaries.go:44] Found k8s binaries, skipping transfer
	I1006 15:01:52.347568  701984 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1006 15:01:52.355357  701984 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1006 15:01:52.367896  701984 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1006 15:01:52.380296  701984 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1006 15:01:52.392680  701984 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1006 15:01:52.396473  701984 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1006 15:01:52.406328  701984 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 15:01:52.485101  701984 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1006 15:01:52.514051  701984 certs.go:69] Setting up /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559 for IP: 192.168.49.2
	I1006 15:01:52.514073  701984 certs.go:195] generating shared ca certs ...
	I1006 15:01:52.514090  701984 certs.go:227] acquiring lock for ca certs: {Name:mka0cc25cb6a953e937aa825fc55167759271aaf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 15:01:52.514284  701984 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21701-626179/.minikube/ca.key
	I1006 15:01:52.514339  701984 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21701-626179/.minikube/proxy-client-ca.key
	I1006 15:01:52.514355  701984 certs.go:257] generating profile certs ...
	I1006 15:01:52.514462  701984 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/client.key
	I1006 15:01:52.514544  701984 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.key.ac196ca6
	I1006 15:01:52.514595  701984 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.key
	I1006 15:01:52.514610  701984 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1006 15:01:52.514629  701984 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1006 15:01:52.514646  701984 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1006 15:01:52.514666  701984 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1006 15:01:52.514682  701984 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1006 15:01:52.514731  701984 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1006 15:01:52.514762  701984 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1006 15:01:52.514780  701984 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1006 15:01:52.514855  701984 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/629719.pem (1338 bytes)
	W1006 15:01:52.514898  701984 certs.go:480] ignoring /home/jenkins/minikube-integration/21701-626179/.minikube/certs/629719_empty.pem, impossibly tiny 0 bytes
	I1006 15:01:52.514911  701984 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca-key.pem (1675 bytes)
	I1006 15:01:52.514943  701984 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem (1082 bytes)
	I1006 15:01:52.514975  701984 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/cert.pem (1123 bytes)
	I1006 15:01:52.515013  701984 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/key.pem (1679 bytes)
	I1006 15:01:52.515066  701984 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem (1708 bytes)
	I1006 15:01:52.515159  701984 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/629719.pem -> /usr/share/ca-certificates/629719.pem
	I1006 15:01:52.515184  701984 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem -> /usr/share/ca-certificates/6297192.pem
	I1006 15:01:52.515222  701984 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1006 15:01:52.515850  701984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1006 15:01:52.536297  701984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1006 15:01:52.555790  701984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1006 15:01:52.575066  701984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1006 15:01:52.597425  701984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1006 15:01:52.616188  701984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1006 15:01:52.633992  701984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1006 15:01:52.651317  701984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1006 15:01:52.668942  701984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/certs/629719.pem --> /usr/share/ca-certificates/629719.pem (1338 bytes)
	I1006 15:01:52.685650  701984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem --> /usr/share/ca-certificates/6297192.pem (1708 bytes)
	I1006 15:01:52.702738  701984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1006 15:01:52.720514  701984 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1006 15:01:52.732781  701984 ssh_runner.go:195] Run: openssl version
	I1006 15:01:52.739000  701984 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/629719.pem && ln -fs /usr/share/ca-certificates/629719.pem /etc/ssl/certs/629719.pem"
	I1006 15:01:52.747351  701984 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/629719.pem
	I1006 15:01:52.751001  701984 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  6 14:13 /usr/share/ca-certificates/629719.pem
	I1006 15:01:52.751062  701984 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/629719.pem
	I1006 15:01:52.785464  701984 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/629719.pem /etc/ssl/certs/51391683.0"
	I1006 15:01:52.793884  701984 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6297192.pem && ln -fs /usr/share/ca-certificates/6297192.pem /etc/ssl/certs/6297192.pem"
	I1006 15:01:52.802527  701984 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6297192.pem
	I1006 15:01:52.806287  701984 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  6 14:13 /usr/share/ca-certificates/6297192.pem
	I1006 15:01:52.806346  701984 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6297192.pem
	I1006 15:01:52.839905  701984 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6297192.pem /etc/ssl/certs/3ec20f2e.0"
	I1006 15:01:52.847950  701984 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1006 15:01:52.856269  701984 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1006 15:01:52.859833  701984 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  6 13:56 /usr/share/ca-certificates/minikubeCA.pem
	I1006 15:01:52.859889  701984 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1006 15:01:52.893744  701984 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
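The hash-and-symlink sequence above implements OpenSSL's CA lookup convention: `openssl x509 -hash -noout` prints the certificate's subject-name hash (e.g. 51391683, b5213941), and OpenSSL resolves trust by looking for a file named "<hash>.0" in /etc/ssl/certs, so each cert gets a hash-named symlink; the `test -L || ln -fs` guard keeps the step idempotent. A condensed sketch of the same install, assuming one cert already copied under /usr/share/ca-certificates:

    # Sketch: register one CA cert where OpenSSL expects to find it by subject hash.
    cert=/usr/share/ca-certificates/minikubeCA.pem
    hash=$(openssl x509 -hash -noout -in "$cert")
    sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"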
	I1006 15:01:52.902397  701984 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1006 15:01:52.906224  701984 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1006 15:01:52.940584  701984 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1006 15:01:52.975121  701984 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1006 15:01:53.010068  701984 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1006 15:01:53.056395  701984 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1006 15:01:53.098917  701984 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
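Each of the `-checkend 86400` runs above asks openssl to exit non-zero if the certificate expires within 86400 seconds (24 hours); minikube sweeps the control-plane client and serving certs this way to decide whether any need regenerating. The same sweep as a loop (cert names and paths taken from the log):

    # Sketch: flag any control-plane cert that expires within the next 24h.
    for c in apiserver-etcd-client apiserver-kubelet-client front-proxy-client \
             etcd/server etcd/healthcheck-client etcd/peer; do
      sudo openssl x509 -noout -checkend 86400 \
        -in "/var/lib/minikube/certs/${c}.crt" || echo "${c}.crt expires within 24h"
    done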
	I1006 15:01:53.133146  701984 kubeadm.go:400] StartCluster: {Name:ha-481559 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-481559 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 15:01:53.133293  701984 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1006 15:01:53.133350  701984 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1006 15:01:53.161765  701984 cri.go:89] found id: ""
	I1006 15:01:53.161834  701984 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1006 15:01:53.169767  701984 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1006 15:01:53.169786  701984 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1006 15:01:53.169835  701984 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1006 15:01:53.177348  701984 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1006 15:01:53.177860  701984 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-481559" does not appear in /home/jenkins/minikube-integration/21701-626179/kubeconfig
	I1006 15:01:53.178037  701984 kubeconfig.go:62] /home/jenkins/minikube-integration/21701-626179/kubeconfig needs updating (will repair): [kubeconfig missing "ha-481559" cluster setting kubeconfig missing "ha-481559" context setting]
	I1006 15:01:53.178466  701984 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-626179/kubeconfig: {Name:mke84a74c9d22714f21826744ac414fa621492d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 15:01:53.179258  701984 kapi.go:59] client config for ha-481559: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/client.crt", KeyFile:"/home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/client.key", CAFile:"/home/jenkins/minikube-integration/21701-626179/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1006 15:01:53.179749  701984 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1006 15:01:53.179781  701984 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1006 15:01:53.179788  701984 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1006 15:01:53.179794  701984 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1006 15:01:53.179789  701984 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1006 15:01:53.179801  701984 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1006 15:01:53.180239  701984 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1006 15:01:53.188398  701984 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1006 15:01:53.188432  701984 kubeadm.go:601] duration metric: took 18.640424ms to restartPrimaryControlPlane
	I1006 15:01:53.188443  701984 kubeadm.go:402] duration metric: took 55.31048ms to StartCluster
	I1006 15:01:53.188464  701984 settings.go:142] acquiring lock: {Name:mk49b10f71f24d1f54d5c453b3b04e717e9a9100 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 15:01:53.188537  701984 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21701-626179/kubeconfig
	I1006 15:01:53.189024  701984 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-626179/kubeconfig: {Name:mke84a74c9d22714f21826744ac414fa621492d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 15:01:53.189291  701984 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1006 15:01:53.189351  701984 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1006 15:01:53.189450  701984 addons.go:69] Setting storage-provisioner=true in profile "ha-481559"
	I1006 15:01:53.189472  701984 addons.go:238] Setting addon storage-provisioner=true in "ha-481559"
	I1006 15:01:53.189480  701984 addons.go:69] Setting default-storageclass=true in profile "ha-481559"
	I1006 15:01:53.189497  701984 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-481559"
	I1006 15:01:53.189510  701984 host.go:66] Checking if "ha-481559" exists ...
	I1006 15:01:53.189548  701984 config.go:182] Loaded profile config "ha-481559": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 15:01:53.189835  701984 cli_runner.go:164] Run: docker container inspect ha-481559 --format={{.State.Status}}
	I1006 15:01:53.190004  701984 cli_runner.go:164] Run: docker container inspect ha-481559 --format={{.State.Status}}
	I1006 15:01:53.192670  701984 out.go:179] * Verifying Kubernetes components...
	I1006 15:01:53.193943  701984 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 15:01:53.209649  701984 kapi.go:59] client config for ha-481559: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/client.crt", KeyFile:"/home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/client.key", CAFile:"/home/jenkins/minikube-integration/21701-626179/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1006 15:01:53.210039  701984 addons.go:238] Setting addon default-storageclass=true in "ha-481559"
	I1006 15:01:53.210089  701984 host.go:66] Checking if "ha-481559" exists ...
	I1006 15:01:53.210542  701984 cli_runner.go:164] Run: docker container inspect ha-481559 --format={{.State.Status}}
	I1006 15:01:53.211200  701984 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1006 15:01:53.212531  701984 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 15:01:53.212549  701984 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1006 15:01:53.212600  701984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 15:01:53.238402  701984 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1006 15:01:53.238430  701984 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1006 15:01:53.238493  701984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 15:01:53.240785  701984 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32893 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa Username:docker}
	I1006 15:01:53.257980  701984 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32893 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa Username:docker}
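The docker inspect template used above extracts the host-side SSH port: the inner `index` selects the "22/tcp" binding list from .NetworkSettings.Ports, the outer `index` takes element 0, and .HostPort yields the published port that the sshutil lines then connect to (32893 here). Runnable standalone:

    # The inspect template from the log, as a one-off command:
    docker container inspect ha-481559 \
      -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'
    # -> 32893, per the "new ssh client" lines above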
	I1006 15:01:53.293467  701984 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1006 15:01:53.307364  701984 node_ready.go:35] waiting up to 6m0s for node "ha-481559" to be "Ready" ...
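node_ready polls GET /api/v1/nodes/ha-481559 until the node reports a Ready condition of True or the 6m budget runs out; the repeated "connection refused" warnings later in this log show the apiserver is not yet listening on 192.168.49.2:8443. A roughly equivalent one-liner with stock kubectl (same node name and budget as the log):

    # Hedged sketch: wait for the node's Ready condition the way node_ready does.
    kubectl wait --for=condition=Ready node/ha-481559 --timeout=6m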
	I1006 15:01:53.350572  701984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 15:01:53.365695  701984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	W1006 15:01:53.407298  701984 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 15:01:53.407342  701984 retry.go:31] will retry after 357.649421ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 15:01:53.420853  701984 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 15:01:53.420888  701984 retry.go:31] will retry after 373.269917ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
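Every apply in this stretch fails the same way: kubectl's client-side validation tries to download the OpenAPI schema from the apiserver and gets connection refused because the control plane is still coming up. The `--validate=false` escape hatch named in the error would only silence the validation step; the apply itself still needs the apiserver, so minikube instead backs off and retries, switching to `apply --force` on subsequent attempts as the log shows. A condensed sketch of that retry loop, using the paths from the log:

    # Sketch: retry the addon apply until the apiserver accepts connections.
    until sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
          /var/lib/minikube/binaries/v1.34.1/kubectl apply --force \
          -f /etc/kubernetes/addons/storage-provisioner.yaml; do
      sleep 2   # minikube uses a jittered, growing backoff instead of a fixed sleep
    done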
	I1006 15:01:53.765311  701984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 15:01:53.794914  701984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1006 15:01:53.820162  701984 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 15:01:53.820198  701984 retry.go:31] will retry after 560.850722ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 15:01:53.849381  701984 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 15:01:53.849415  701984 retry.go:31] will retry after 534.611771ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 15:01:54.381588  701984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 15:01:54.385156  701984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1006 15:01:54.438225  701984 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 15:01:54.438264  701984 retry.go:31] will retry after 554.670785ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 15:01:54.439112  701984 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 15:01:54.439133  701984 retry.go:31] will retry after 308.986378ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 15:01:54.748751  701984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1006 15:01:54.803407  701984 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 15:01:54.803442  701984 retry.go:31] will retry after 474.547882ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 15:01:54.993194  701984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1006 15:01:55.046254  701984 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 15:01:55.046297  701984 retry.go:31] will retry after 677.664195ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 15:01:55.278726  701984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1006 15:01:55.308628  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:01:55.332936  701984 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 15:01:55.332970  701984 retry.go:31] will retry after 1.775881807s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 15:01:55.724438  701984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1006 15:01:55.776937  701984 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 15:01:55.776969  701984 retry.go:31] will retry after 843.878196ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 15:01:56.621961  701984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1006 15:01:56.675428  701984 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 15:01:56.675463  701984 retry.go:31] will retry after 1.450357982s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 15:01:57.109402  701984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1006 15:01:57.163276  701984 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 15:01:57.163309  701984 retry.go:31] will retry after 2.464163888s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 15:01:57.308897  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	I1006 15:01:58.126261  701984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1006 15:01:58.179363  701984 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 15:01:58.179391  701984 retry.go:31] will retry after 3.126763455s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 15:01:59.628619  701984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1006 15:01:59.681154  701984 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 15:01:59.681190  701984 retry.go:31] will retry after 1.480440704s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 15:01:59.808774  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	I1006 15:02:01.162599  701984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1006 15:02:01.216807  701984 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 15:02:01.216851  701984 retry.go:31] will retry after 3.761635647s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 15:02:01.307128  701984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1006 15:02:01.362791  701984 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 15:02:01.362827  701984 retry.go:31] will retry after 3.177813602s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 15:02:01.808826  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:02:04.308637  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	I1006 15:02:04.540904  701984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1006 15:02:04.594444  701984 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 15:02:04.594481  701984 retry.go:31] will retry after 9.418537731s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 15:02:04.979473  701984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1006 15:02:05.032152  701984 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 15:02:05.032191  701984 retry.go:31] will retry after 8.203513024s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 15:02:06.808141  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:02:08.808703  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:02:11.308146  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	I1006 15:02:13.236126  701984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1006 15:02:13.291139  701984 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 15:02:13.291178  701984 retry.go:31] will retry after 13.734152969s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 15:02:13.308624  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	I1006 15:02:14.013259  701984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1006 15:02:14.066927  701984 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 15:02:14.066963  701984 retry.go:31] will retry after 4.968343953s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 15:02:15.808091  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:02:17.808317  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	I1006 15:02:19.035709  701984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1006 15:02:19.089785  701984 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 15:02:19.089821  701984 retry.go:31] will retry after 18.450329534s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 15:02:19.808717  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:02:22.308279  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:02:24.808005  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:02:26.808376  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	I1006 15:02:27.025657  701984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1006 15:02:27.079430  701984 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 15:02:27.079467  701984 retry.go:31] will retry after 18.308744233s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 15:02:28.808528  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:02:31.308878  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:02:33.808327  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:02:35.808829  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	I1006 15:02:37.540393  701984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1006 15:02:37.593965  701984 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 15:02:37.593995  701984 retry.go:31] will retry after 14.430254714s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 15:02:38.308827  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:02:40.808189  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:02:42.808607  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:02:44.808693  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	I1006 15:02:45.388851  701984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1006 15:02:45.443913  701984 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 15:02:45.443945  701984 retry.go:31] will retry after 30.607683046s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 15:02:47.309012  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:02:49.808000  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:02:51.808101  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	I1006 15:02:52.024419  701984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1006 15:02:52.078859  701984 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 15:02:52.078891  701984 retry.go:31] will retry after 32.375753443s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 15:02:53.808234  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:02:55.808746  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:02:58.308064  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:03:00.308503  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:03:02.808227  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:03:04.808951  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:03:07.308259  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:03:09.308723  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:03:11.808466  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:03:13.808963  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	I1006 15:03:16.052424  701984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1006 15:03:16.106854  701984 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 15:03:16.106899  701984 retry.go:31] will retry after 23.781842061s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 15:03:16.308055  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:03:18.308668  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:03:20.808285  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:03:22.808988  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	I1006 15:03:24.455485  701984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1006 15:03:24.509566  701984 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 15:03:24.509687  701984 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1006 15:03:25.308449  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:03:27.308947  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:03:29.808772  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:03:32.308333  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:03:34.308810  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:03:36.808133  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:03:38.808620  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	I1006 15:03:39.889153  701984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1006 15:03:39.944329  701984 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 15:03:39.944473  701984 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1006 15:03:39.946959  701984 out.go:179] * Enabled addons: 
	I1006 15:03:39.947914  701984 addons.go:514] duration metric: took 1m46.758571336s for enable addons: enabled=[]
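	(Every addon callback above exhausted its retries against the unreachable apiserver, so the enable phase finishes with an empty set, enabled=[]. When reproducing locally, the resulting addon state can be inspected against this run's profile name, shown here only as an illustration:

	    out/minikube-linux-amd64 -p ha-481559 addons list
	)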
	W1006 15:03:41.308834  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:03:43.808716  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:03:46.308473  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:03:48.808081  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:03:50.808732  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:03:52.809075  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:03:55.308499  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:03:57.308770  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:03:59.308964  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:04:01.808320  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:04:03.808672  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:04:05.808747  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:04:07.808918  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:04:10.307950  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:04:12.307991  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:04:14.308152  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:04:16.808061  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:04:19.307993  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:04:21.308090  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:04:23.308313  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:04:25.807982  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:04:27.808970  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:04:30.308966  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:04:32.807967  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:04:34.808007  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:04:36.808048  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:04:38.809015  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:04:41.308101  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:04:43.308272  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:04:45.308962  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:04:47.808271  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:04:50.308958  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:04:52.808017  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:04:54.808283  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:04:57.307946  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:04:59.309045  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:05:01.808138  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:05:03.808398  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:05:06.308174  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:05:08.808983  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:05:11.307996  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:05:13.308266  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:05:15.808972  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:05:18.308060  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:05:20.309001  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:05:22.807955  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:05:25.309026  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:05:27.808933  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:05:30.307944  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:05:32.308185  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:05:34.308727  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:05:36.808124  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:05:39.308015  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:05:41.308156  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:05:43.308548  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:05:45.308597  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:05:47.308993  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:05:49.809063  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:05:52.308161  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:05:54.308340  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:05:56.808315  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:05:58.808798  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:06:01.308198  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:06:03.807981  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:06:06.308060  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:06:08.807934  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:06:10.808929  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:06:13.308149  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:06:15.308997  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:06:17.808931  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:06:20.308951  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:06:22.807942  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:06:25.308953  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:06:27.807967  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:06:29.808934  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:06:32.307960  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:06:34.308089  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:06:36.308173  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:06:38.308890  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:06:40.808860  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:06:43.308107  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:06:45.808973  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:06:47.809038  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:06:50.308996  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:06:52.807974  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:06:54.808028  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:06:57.308950  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:06:59.808908  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:07:02.308088  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:07:04.308444  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:07:06.308749  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:07:08.808774  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:07:10.808934  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:07:13.308241  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:07:15.807927  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:07:17.808956  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:07:20.309035  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:07:22.808059  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:07:25.308026  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:07:27.809087  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:07:30.307981  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:07:32.308029  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:07:34.308474  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:07:36.308569  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:07:38.808502  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:07:41.308145  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:07:43.807946  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:07:45.808852  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:07:48.308766  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:07:50.808719  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:07:52.809004  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	I1006 15:07:53.308060  701984 node_ready.go:38] duration metric: took 6m0.000216007s for node "ha-481559" to be "Ready" ...
	I1006 15:07:53.311054  701984 out.go:203] 
	W1006 15:07:53.312196  701984 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1006 15:07:53.312219  701984 out.go:285] * 
	W1006 15:07:53.313838  701984 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1006 15:07:53.315023  701984 out.go:203] 
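	(The start ultimately aborts because the node never reported Ready inside the default 6m0s wait window, matching the duration metric of 6m0.000216007s above. The window is configurable; a hedged sketch of the relevant minikube flags follows, with example values that would not fix the underlying container-creation failure:

	    # --wait selects which components to wait for; --wait-timeout bounds the wait.
	    out/minikube-linux-amd64 start -p ha-481559 --wait=all --wait-timeout=10m
	)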
	
	
	==> CRI-O <==
	Oct 06 15:07:45 ha-481559 crio[519]: time="2025-10-06T15:07:45.633715114Z" level=info msg="createCtr: deleting container 1c23465231411a2a5d53aafce9efcc8a9601423dbf737aed2fbc35c0cfd72666 from storage" id=15b183ac-4f30-496c-b4f5-2cf301336d6a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 15:07:45 ha-481559 crio[519]: time="2025-10-06T15:07:45.635418322Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-ha-481559_kube-system_cc93cb8d89afaa943672c70952b45174_0" id=288a08ca-6816-4291-a0bd-1ce84792e8bc name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 15:07:45 ha-481559 crio[519]: time="2025-10-06T15:07:45.6357237Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-ha-481559_kube-system_5f3181798721fe8691d871f051785efc_0" id=15b183ac-4f30-496c-b4f5-2cf301336d6a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 15:07:46 ha-481559 crio[519]: time="2025-10-06T15:07:46.605899234Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=7dcaf3bc-ee04-4da2-9a70-40eb6f735cd0 name=/runtime.v1.ImageService/ImageStatus
	Oct 06 15:07:46 ha-481559 crio[519]: time="2025-10-06T15:07:46.606919742Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=b1bb9b90-74e5-4120-8197-35710054bee3 name=/runtime.v1.ImageService/ImageStatus
	Oct 06 15:07:46 ha-481559 crio[519]: time="2025-10-06T15:07:46.607858943Z" level=info msg="Creating container: kube-system/kube-apiserver-ha-481559/kube-apiserver" id=94453879-139c-429c-a5b1-5ee37a0899b6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 15:07:46 ha-481559 crio[519]: time="2025-10-06T15:07:46.608081127Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 15:07:46 ha-481559 crio[519]: time="2025-10-06T15:07:46.612410997Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 15:07:46 ha-481559 crio[519]: time="2025-10-06T15:07:46.61284047Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 15:07:46 ha-481559 crio[519]: time="2025-10-06T15:07:46.627946174Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=94453879-139c-429c-a5b1-5ee37a0899b6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 15:07:46 ha-481559 crio[519]: time="2025-10-06T15:07:46.629186386Z" level=info msg="createCtr: deleting container ID 7cccc243360d2822c57ef267495d4ba2f52ac7d1a172de4f7bf86c2782752b95 from idIndex" id=94453879-139c-429c-a5b1-5ee37a0899b6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 15:07:46 ha-481559 crio[519]: time="2025-10-06T15:07:46.629237885Z" level=info msg="createCtr: removing container 7cccc243360d2822c57ef267495d4ba2f52ac7d1a172de4f7bf86c2782752b95" id=94453879-139c-429c-a5b1-5ee37a0899b6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 15:07:46 ha-481559 crio[519]: time="2025-10-06T15:07:46.629267049Z" level=info msg="createCtr: deleting container 7cccc243360d2822c57ef267495d4ba2f52ac7d1a172de4f7bf86c2782752b95 from storage" id=94453879-139c-429c-a5b1-5ee37a0899b6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 15:07:46 ha-481559 crio[519]: time="2025-10-06T15:07:46.631063814Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-ha-481559_kube-system_b4e1cca8a09d3789a7e0862428dfe0db_0" id=94453879-139c-429c-a5b1-5ee37a0899b6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 15:07:53 ha-481559 crio[519]: time="2025-10-06T15:07:53.6056339Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=9d4ff92e-10b7-4cbd-a66f-12aec986be76 name=/runtime.v1.ImageService/ImageStatus
	Oct 06 15:07:53 ha-481559 crio[519]: time="2025-10-06T15:07:53.606711328Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=5d809907-2803-4774-bb2b-994147e1fe9e name=/runtime.v1.ImageService/ImageStatus
	Oct 06 15:07:53 ha-481559 crio[519]: time="2025-10-06T15:07:53.607814333Z" level=info msg="Creating container: kube-system/etcd-ha-481559/etcd" id=a0a68d32-290d-475f-98e2-039b9e340155 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 15:07:53 ha-481559 crio[519]: time="2025-10-06T15:07:53.608120994Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 15:07:53 ha-481559 crio[519]: time="2025-10-06T15:07:53.6122724Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 15:07:53 ha-481559 crio[519]: time="2025-10-06T15:07:53.612720273Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 15:07:53 ha-481559 crio[519]: time="2025-10-06T15:07:53.635848688Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=a0a68d32-290d-475f-98e2-039b9e340155 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 15:07:53 ha-481559 crio[519]: time="2025-10-06T15:07:53.637556379Z" level=info msg="createCtr: deleting container ID 5010fd13ff74bfb6cd5c840a91b2b7c210c7ea5032b47c702543d7ccf65b7d27 from idIndex" id=a0a68d32-290d-475f-98e2-039b9e340155 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 15:07:53 ha-481559 crio[519]: time="2025-10-06T15:07:53.637596029Z" level=info msg="createCtr: removing container 5010fd13ff74bfb6cd5c840a91b2b7c210c7ea5032b47c702543d7ccf65b7d27" id=a0a68d32-290d-475f-98e2-039b9e340155 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 15:07:53 ha-481559 crio[519]: time="2025-10-06T15:07:53.637629866Z" level=info msg="createCtr: deleting container 5010fd13ff74bfb6cd5c840a91b2b7c210c7ea5032b47c702543d7ccf65b7d27 from storage" id=a0a68d32-290d-475f-98e2-039b9e340155 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 15:07:53 ha-481559 crio[519]: time="2025-10-06T15:07:53.643172574Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-ha-481559_kube-system_520c6060936b1c2aac479c99ed6c0355_0" id=a0a68d32-290d-475f-98e2-039b9e340155 name=/runtime.v1.RuntimeService/CreateContainer
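	(The CRI-O log pins down the root cause: every CreateContainer call fails with "cannot open sd-bus: No such file or directory". That error comes from the systemd cgroup manager, which needs a systemd D-Bus socket inside the node container (the docker info further down reports CgroupDriver:systemd); with no sd-bus available, every container, including the static control-plane pods, fails at creation. A sketch of the relevant CRI-O setting, file path and value shown for illustration only, and any change here must be mirrored in the kubelet's cgroupDriver:

	    # /etc/crio/crio.conf (excerpt)
	    [crio.runtime]
	    # "systemd" needs a reachable sd-bus socket; "cgroupfs" does not.
	    cgroup_manager = "cgroupfs"
	    # With the cgroupfs manager, conmon must be placed in the pod cgroup.
	    conmon_cgroup = "pod"
	)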
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
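	(The empty table is consistent with the CRI-O errors above: no container ever got past creation, so the runtime has nothing to list. The same view can be taken on the node with crictl, assuming the default CRI-O socket:

	    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a
	)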
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 15:07:55.805240    2194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 15:07:55.805817    2194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 15:07:55.807375    2194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 15:07:55.807802    2194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 15:07:55.809342    2194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
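	(kubectl describe fails for the same reason as every earlier call: the apiserver was never started, so localhost:8443 refuses connections. Two illustrative commands that distinguish "apiserver crashed" from "apiserver never created" on the node:

	    # Is anything listening on the secure port?
	    sudo ss -tlnp | grep 8443 || echo "no listener on 8443"
	    # Did the apiserver container ever exist?
	    sudo crictl ps -a --name kube-apiserver
	)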
	
	
	==> dmesg <==
	
	
	==> kernel <==
	 15:07:55 up  5:50,  0 user,  load average: 0.08, 0.04, 0.09
	Linux ha-481559 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 06 15:07:46 ha-481559 kubelet[675]: E1006 15:07:46.605434     675 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-481559\" not found" node="ha-481559"
	Oct 06 15:07:46 ha-481559 kubelet[675]: E1006 15:07:46.631305     675 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 06 15:07:46 ha-481559 kubelet[675]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 06 15:07:46 ha-481559 kubelet[675]:  > podSandboxID="f7eda3d46c32414abdc80e3039e259073917785f77504bcad4aebf60db4c3330"
	Oct 06 15:07:46 ha-481559 kubelet[675]: E1006 15:07:46.631392     675 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 06 15:07:46 ha-481559 kubelet[675]:         container kube-apiserver start failed in pod kube-apiserver-ha-481559_kube-system(b4e1cca8a09d3789a7e0862428dfe0db): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 06 15:07:46 ha-481559 kubelet[675]:  > logger="UnhandledError"
	Oct 06 15:07:46 ha-481559 kubelet[675]: E1006 15:07:46.631421     675 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-ha-481559" podUID="b4e1cca8a09d3789a7e0862428dfe0db"
	Oct 06 15:07:48 ha-481559 kubelet[675]: E1006 15:07:48.196145     675 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://192.168.49.2:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
	Oct 06 15:07:48 ha-481559 kubelet[675]: E1006 15:07:48.248269     675 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-481559?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 06 15:07:48 ha-481559 kubelet[675]: I1006 15:07:48.419061     675 kubelet_node_status.go:75] "Attempting to register node" node="ha-481559"
	Oct 06 15:07:48 ha-481559 kubelet[675]: E1006 15:07:48.419541     675 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-481559"
	Oct 06 15:07:51 ha-481559 kubelet[675]: E1006 15:07:51.129592     675 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8443: connect: connection refused" event="&Event{ObjectMeta:{ha-481559.186bef0b9dfc36de  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-481559,UID:ha-481559,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ha-481559 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ha-481559,},FirstTimestamp:2025-10-06 15:01:52.592541406 +0000 UTC m=+0.076229606,LastTimestamp:2025-10-06 15:01:52.592541406 +0000 UTC m=+0.076229606,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-481559,}"
	Oct 06 15:07:52 ha-481559 kubelet[675]: E1006 15:07:52.618976     675 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-481559\" not found"
	Oct 06 15:07:53 ha-481559 kubelet[675]: E1006 15:07:53.605090     675 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-481559\" not found" node="ha-481559"
	Oct 06 15:07:53 ha-481559 kubelet[675]: E1006 15:07:53.643581     675 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 06 15:07:53 ha-481559 kubelet[675]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 06 15:07:53 ha-481559 kubelet[675]:  > podSandboxID="2509df0fbb37ea26e7c4176db5318bb5b7bb232dde96912d6badc3737828a2f0"
	Oct 06 15:07:53 ha-481559 kubelet[675]: E1006 15:07:53.643723     675 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 06 15:07:53 ha-481559 kubelet[675]:         container etcd start failed in pod etcd-ha-481559_kube-system(520c6060936b1c2aac479c99ed6c0355): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 06 15:07:53 ha-481559 kubelet[675]:  > logger="UnhandledError"
	Oct 06 15:07:53 ha-481559 kubelet[675]: E1006 15:07:53.643767     675 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-ha-481559" podUID="520c6060936b1c2aac479c99ed6c0355"
	Oct 06 15:07:55 ha-481559 kubelet[675]: E1006 15:07:55.248957     675 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-481559?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 06 15:07:55 ha-481559 kubelet[675]: I1006 15:07:55.421634     675 kubelet_node_status.go:75] "Attempting to register node" node="ha-481559"
	Oct 06 15:07:55 ha-481559 kubelet[675]: E1006 15:07:55.422098     675 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-481559"
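	(The kubelet log closes the loop: CreateContainer for kube-apiserver and etcd fails with the same sd-bus error, and every node-registration attempt then dies against the apiserver those containers were supposed to provide. When debugging a node in this state, the kubelet journal is usually the quickest confirmation; standard systemd tooling, shown as an illustration:

	    sudo journalctl -u kubelet --no-pager | grep -E 'CreateContainerError|sd-bus' | tail
	)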
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-481559 -n ha-481559
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-481559 -n ha-481559: exit status 2 (289.759194ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "ha-481559" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterClusterRestart (1.55s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddSecondaryNode (1.48s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-481559 node add --control-plane --alsologtostderr -v 5
ha_test.go:607: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-481559 node add --control-plane --alsologtostderr -v 5: exit status 103 (247.437226ms)

                                                
                                                
-- stdout --
	* The control-plane node ha-481559 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p ha-481559"

                                                
                                                
-- /stdout --
** stderr ** 
	I1006 15:07:56.226988  706663 out.go:360] Setting OutFile to fd 1 ...
	I1006 15:07:56.227115  706663 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 15:07:56.227123  706663 out.go:374] Setting ErrFile to fd 2...
	I1006 15:07:56.227130  706663 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 15:07:56.227392  706663 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-626179/.minikube/bin
	I1006 15:07:56.227725  706663 mustload.go:65] Loading cluster: ha-481559
	I1006 15:07:56.228119  706663 config.go:182] Loaded profile config "ha-481559": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 15:07:56.228550  706663 cli_runner.go:164] Run: docker container inspect ha-481559 --format={{.State.Status}}
	I1006 15:07:56.246446  706663 host.go:66] Checking if "ha-481559" exists ...
	I1006 15:07:56.246718  706663 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 15:07:56.302979  706663 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-06 15:07:56.292624154 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1006 15:07:56.303142  706663 api_server.go:166] Checking apiserver status ...
	I1006 15:07:56.303196  706663 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 15:07:56.303286  706663 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 15:07:56.319820  706663 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32893 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa Username:docker}
	W1006 15:07:56.422795  706663 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1006 15:07:56.424768  706663 out.go:179] * The control-plane node ha-481559 apiserver is not running: (state=Stopped)
	I1006 15:07:56.425920  706663 out.go:179]   To start a cluster, run: "minikube start -p ha-481559"

** /stderr **
ha_test.go:609: failed to add control-plane node to current ha (multi-control plane) cluster. args "out/minikube-linux-amd64 -p ha-481559 node add --control-plane --alsologtostderr -v 5" : exit status 103
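The exit status 103 follows directly from the stderr trace: `node add` first loads the profile, then checks apiserver liveness by running `sudo pgrep -xnf kube-apiserver.*minikube.*` over SSH inside the node; with no matching process it reports state=Stopped and refuses to add a node. A hand reproduction of that probe is sketched below; entering the node with `docker exec` instead of minikube's SSH tunnel is an assumption made for brevity.

	// apiserverprobe.go - illustrative reproduction of the liveness probe from
	// the stderr trace above ("sudo pgrep -xnf kube-apiserver.*minikube.*").
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("docker", "exec", "ha-481559",
			"sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err != nil {
			// pgrep exits 1 when nothing matches: the apiserver is not running,
			// which is the state=Stopped that makes `node add` exit 103.
			fmt.Println("no kube-apiserver process found:", err)
			return
		}
		fmt.Printf("kube-apiserver pid: %s", out)
	}
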
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/AddSecondaryNode]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
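
(The snapshot above is just the three standard proxy variables read from the host environment; a throwaway Go sketch of the same capture follows, with the "<empty>" placeholder mirroring the helper's output. The file and function names are made up for illustration.)

	// proxysnap.go - hypothetical re-creation of the HOST ENV snapshot line above.
	package main

	import (
		"fmt"
		"os"
	)

	func snap(key string) string {
		if v := os.Getenv(key); v != "" {
			return v
		}
		return "<empty>" // same placeholder the helper prints
	}

	func main() {
		for _, k := range []string{"HTTP_PROXY", "HTTPS_PROXY", "NO_PROXY"} {
			fmt.Printf("%s=%q\n", k, snap(k))
		}
	}
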
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/AddSecondaryNode]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-481559
helpers_test.go:243: (dbg) docker inspect ha-481559:

-- stdout --
	[
	    {
	        "Id": "8b017d29b6b188c11217460aa328e959f3cceef4aaac68c0efbe9c3f356f27b0",
	        "Created": "2025-10-06T14:44:39.623616791Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 702186,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-06T15:01:46.338559643Z",
	            "FinishedAt": "2025-10-06T15:01:45.038433314Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/8b017d29b6b188c11217460aa328e959f3cceef4aaac68c0efbe9c3f356f27b0/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8b017d29b6b188c11217460aa328e959f3cceef4aaac68c0efbe9c3f356f27b0/hostname",
	        "HostsPath": "/var/lib/docker/containers/8b017d29b6b188c11217460aa328e959f3cceef4aaac68c0efbe9c3f356f27b0/hosts",
	        "LogPath": "/var/lib/docker/containers/8b017d29b6b188c11217460aa328e959f3cceef4aaac68c0efbe9c3f356f27b0/8b017d29b6b188c11217460aa328e959f3cceef4aaac68c0efbe9c3f356f27b0-json.log",
	        "Name": "/ha-481559",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-481559:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-481559",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "8b017d29b6b188c11217460aa328e959f3cceef4aaac68c0efbe9c3f356f27b0",
	                "LowerDir": "/var/lib/docker/overlay2/764c4d13032cea1c981a1513641378a4e309e5bc01b5356c6fecba1c3e0e2311-init/diff:/var/lib/docker/overlay2/498c39ad2e273bbda04a4b230222b9767ea2da097b1fe98436168d26143cd080/diff",
	                "MergedDir": "/var/lib/docker/overlay2/764c4d13032cea1c981a1513641378a4e309e5bc01b5356c6fecba1c3e0e2311/merged",
	                "UpperDir": "/var/lib/docker/overlay2/764c4d13032cea1c981a1513641378a4e309e5bc01b5356c6fecba1c3e0e2311/diff",
	                "WorkDir": "/var/lib/docker/overlay2/764c4d13032cea1c981a1513641378a4e309e5bc01b5356c6fecba1c3e0e2311/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "ha-481559",
	                "Source": "/var/lib/docker/volumes/ha-481559/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-481559",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-481559",
	                "name.minikube.sigs.k8s.io": "ha-481559",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "96ad0a0c00ce1e2fd1255251fdbe6e26beae966a5054a86bbea20c89f537c09f",
	            "SandboxKey": "/var/run/docker/netns/96ad0a0c00ce",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32893"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32894"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32897"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32895"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32896"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-481559": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "da:92:da:5b:3d:78",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "be549c6a1ae4457d4629d9a7f86cde88021333ee0af8bb7a740b008115c43dde",
	                    "EndpointID": "c5dcb77b8e9feae93629ab92a205600e06ab65076f80e1ea27e6fbc473fcf4ef",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-481559",
	                        "8b017d29b6b1"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
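
The post-mortem (and minikube itself, via the `docker container inspect -f` template seen earlier in the trace) only needs a few fields out of this JSON, chiefly the published host port for 22/tcp that SSH goes through. A minimal Go sketch of that extraction, with the struct trimmed to just the fields used and the container name taken from this run:

	// ports.go - minimal sketch: recover the 22/tcp host port from
	// `docker inspect ha-481559` output shaped like the JSON above.
	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	type inspect struct {
		NetworkSettings struct {
			Ports map[string][]struct {
				HostIp   string
				HostPort string
			}
		}
	}

	func main() {
		out, err := exec.Command("docker", "inspect", "ha-481559").Output()
		if err != nil {
			panic(err)
		}
		var containers []inspect
		if err := json.Unmarshal(out, &containers); err != nil || len(containers) == 0 {
			panic("unexpected inspect output")
		}
		for _, b := range containers[0].NetworkSettings.Ports["22/tcp"] {
			fmt.Printf("ssh endpoint: %s:%s\n", b.HostIp, b.HostPort) // 127.0.0.1:32893 in this run
		}
	}
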
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-481559 -n ha-481559
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-481559 -n ha-481559: exit status 2 (284.985948ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestMultiControlPlane/serial/AddSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/AddSecondaryNode]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-481559 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/AddSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                             ARGS                                             │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 14:52 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 14:53 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 14:53 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 14:53 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 14:53 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 14:53 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 14:53 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 14:54 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 14:54 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                        │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 14:54 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- exec  -- nslookup kubernetes.io                                         │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 14:54 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- exec  -- nslookup kubernetes.default                                    │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 14:54 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local                  │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 14:54 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                        │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 14:54 UTC │                     │
	│ node    │ ha-481559 node add --alsologtostderr -v 5                                                    │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 14:54 UTC │                     │
	│ node    │ ha-481559 node stop m02 --alsologtostderr -v 5                                               │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 14:54 UTC │                     │
	│ node    │ ha-481559 node start m02 --alsologtostderr -v 5                                              │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 14:54 UTC │                     │
	│ node    │ ha-481559 node list --alsologtostderr -v 5                                                   │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 14:55 UTC │                     │
	│ stop    │ ha-481559 stop --alsologtostderr -v 5                                                        │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 14:55 UTC │ 06 Oct 25 14:55 UTC │
	│ start   │ ha-481559 start --wait true --alsologtostderr -v 5                                           │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 14:55 UTC │                     │
	│ node    │ ha-481559 node list --alsologtostderr -v 5                                                   │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 15:01 UTC │                     │
	│ node    │ ha-481559 node delete m03 --alsologtostderr -v 5                                             │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 15:01 UTC │                     │
	│ stop    │ ha-481559 stop --alsologtostderr -v 5                                                        │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 15:01 UTC │ 06 Oct 25 15:01 UTC │
	│ start   │ ha-481559 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 15:01 UTC │                     │
	│ node    │ ha-481559 node add --control-plane --alsologtostderr -v 5                                    │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 15:07 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/06 15:01:46
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1006 15:01:46.116187  701984 out.go:360] Setting OutFile to fd 1 ...
	I1006 15:01:46.116327  701984 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 15:01:46.116336  701984 out.go:374] Setting ErrFile to fd 2...
	I1006 15:01:46.116340  701984 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 15:01:46.116564  701984 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-626179/.minikube/bin
	I1006 15:01:46.116989  701984 out.go:368] Setting JSON to false
	I1006 15:01:46.117973  701984 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":20642,"bootTime":1759742264,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1006 15:01:46.118071  701984 start.go:140] virtualization: kvm guest
	I1006 15:01:46.119930  701984 out.go:179] * [ha-481559] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1006 15:01:46.121071  701984 out.go:179]   - MINIKUBE_LOCATION=21701
	I1006 15:01:46.121071  701984 notify.go:220] Checking for updates...
	I1006 15:01:46.123063  701984 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1006 15:01:46.124433  701984 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21701-626179/kubeconfig
	I1006 15:01:46.125406  701984 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21701-626179/.minikube
	I1006 15:01:46.126304  701984 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1006 15:01:46.127330  701984 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1006 15:01:46.128989  701984 config.go:182] Loaded profile config "ha-481559": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 15:01:46.129680  701984 driver.go:421] Setting default libvirt URI to qemu:///system
	I1006 15:01:46.153833  701984 docker.go:123] docker version: linux-28.5.0:Docker Engine - Community
	I1006 15:01:46.153923  701984 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 15:01:46.210040  701984 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-06 15:01:46.200236285 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1006 15:01:46.210147  701984 docker.go:318] overlay module found
	I1006 15:01:46.211692  701984 out.go:179] * Using the docker driver based on existing profile
	I1006 15:01:46.212596  701984 start.go:304] selected driver: docker
	I1006 15:01:46.212612  701984 start.go:924] validating driver "docker" against &{Name:ha-481559 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-481559 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 15:01:46.212693  701984 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1006 15:01:46.212776  701984 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 15:01:46.269605  701984 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-06 15:01:46.258876471 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1006 15:01:46.270302  701984 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1006 15:01:46.270329  701984 cni.go:84] Creating CNI manager for ""
	I1006 15:01:46.270373  701984 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1006 15:01:46.270419  701984 start.go:348] cluster config:
	{Name:ha-481559 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-481559 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 15:01:46.272125  701984 out.go:179] * Starting "ha-481559" primary control-plane node in "ha-481559" cluster
	I1006 15:01:46.273048  701984 cache.go:123] Beginning downloading kic base image for docker with crio
	I1006 15:01:46.274095  701984 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1006 15:01:46.274969  701984 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1006 15:01:46.275001  701984 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21701-626179/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1006 15:01:46.275010  701984 cache.go:58] Caching tarball of preloaded images
	I1006 15:01:46.275079  701984 preload.go:233] Found /home/jenkins/minikube-integration/21701-626179/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1006 15:01:46.275089  701984 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1006 15:01:46.275081  701984 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1006 15:01:46.275176  701984 profile.go:143] Saving config to /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/config.json ...
	I1006 15:01:46.295225  701984 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1006 15:01:46.295246  701984 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1006 15:01:46.295266  701984 cache.go:232] Successfully downloaded all kic artifacts
	I1006 15:01:46.295293  701984 start.go:360] acquireMachinesLock for ha-481559: {Name:mk240cd185ab39e9e4d3fa7c476aea5736cb5b11 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1006 15:01:46.295349  701984 start.go:364] duration metric: took 37.555µs to acquireMachinesLock for "ha-481559"
	I1006 15:01:46.295367  701984 start.go:96] Skipping create...Using existing machine configuration
	I1006 15:01:46.295375  701984 fix.go:54] fixHost starting: 
	I1006 15:01:46.295587  701984 cli_runner.go:164] Run: docker container inspect ha-481559 --format={{.State.Status}}
	I1006 15:01:46.312275  701984 fix.go:112] recreateIfNeeded on ha-481559: state=Stopped err=<nil>
	W1006 15:01:46.312302  701984 fix.go:138] unexpected machine state, will restart: <nil>
	I1006 15:01:46.314002  701984 out.go:252] * Restarting existing docker container for "ha-481559" ...
	I1006 15:01:46.314062  701984 cli_runner.go:164] Run: docker start ha-481559
	I1006 15:01:46.546450  701984 cli_runner.go:164] Run: docker container inspect ha-481559 --format={{.State.Status}}
	I1006 15:01:46.564424  701984 kic.go:430] container "ha-481559" state is running.
	I1006 15:01:46.564772  701984 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-481559
	I1006 15:01:46.582786  701984 profile.go:143] Saving config to /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/config.json ...
	I1006 15:01:46.582997  701984 machine.go:93] provisionDockerMachine start ...
	I1006 15:01:46.583078  701984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 15:01:46.601452  701984 main.go:141] libmachine: Using SSH client type: native
	I1006 15:01:46.601724  701984 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32893 <nil> <nil>}
	I1006 15:01:46.601739  701984 main.go:141] libmachine: About to run SSH command:
	hostname
	I1006 15:01:46.602337  701984 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:35090->127.0.0.1:32893: read: connection reset by peer
	I1006 15:01:49.745932  701984 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-481559
	
	I1006 15:01:49.745960  701984 ubuntu.go:182] provisioning hostname "ha-481559"
	I1006 15:01:49.746042  701984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 15:01:49.763495  701984 main.go:141] libmachine: Using SSH client type: native
	I1006 15:01:49.763769  701984 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32893 <nil> <nil>}
	I1006 15:01:49.763784  701984 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-481559 && echo "ha-481559" | sudo tee /etc/hostname
	I1006 15:01:49.916644  701984 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-481559
	
	I1006 15:01:49.916725  701984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 15:01:49.934847  701984 main.go:141] libmachine: Using SSH client type: native
	I1006 15:01:49.935071  701984 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32893 <nil> <nil>}
	I1006 15:01:49.935089  701984 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-481559' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-481559/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-481559' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1006 15:01:50.079011  701984 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1006 15:01:50.079055  701984 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21701-626179/.minikube CaCertPath:/home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21701-626179/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21701-626179/.minikube}
	I1006 15:01:50.079077  701984 ubuntu.go:190] setting up certificates
	I1006 15:01:50.079088  701984 provision.go:84] configureAuth start
	I1006 15:01:50.079141  701984 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-481559
	I1006 15:01:50.096776  701984 provision.go:143] copyHostCerts
	I1006 15:01:50.096843  701984 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21701-626179/.minikube/key.pem
	I1006 15:01:50.096887  701984 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-626179/.minikube/key.pem, removing ...
	I1006 15:01:50.096924  701984 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-626179/.minikube/key.pem
	I1006 15:01:50.097001  701984 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21701-626179/.minikube/key.pem (1679 bytes)
	I1006 15:01:50.097123  701984 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21701-626179/.minikube/ca.pem
	I1006 15:01:50.097151  701984 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-626179/.minikube/ca.pem, removing ...
	I1006 15:01:50.097159  701984 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-626179/.minikube/ca.pem
	I1006 15:01:50.097230  701984 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21701-626179/.minikube/ca.pem (1082 bytes)
	I1006 15:01:50.097381  701984 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21701-626179/.minikube/cert.pem
	I1006 15:01:50.097413  701984 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-626179/.minikube/cert.pem, removing ...
	I1006 15:01:50.097420  701984 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-626179/.minikube/cert.pem
	I1006 15:01:50.097468  701984 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21701-626179/.minikube/cert.pem (1123 bytes)
	I1006 15:01:50.097549  701984 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21701-626179/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca-key.pem org=jenkins.ha-481559 san=[127.0.0.1 192.168.49.2 ha-481559 localhost minikube]
	I1006 15:01:50.447800  701984 provision.go:177] copyRemoteCerts
	I1006 15:01:50.447874  701984 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1006 15:01:50.447927  701984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 15:01:50.465959  701984 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32893 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa Username:docker}
	I1006 15:01:50.568789  701984 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1006 15:01:50.568870  701984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1006 15:01:50.586702  701984 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1006 15:01:50.586774  701984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1006 15:01:50.604720  701984 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1006 15:01:50.604808  701984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1006 15:01:50.622688  701984 provision.go:87] duration metric: took 543.582589ms to configureAuth
	I1006 15:01:50.622726  701984 ubuntu.go:206] setting minikube options for container-runtime
	I1006 15:01:50.622909  701984 config.go:182] Loaded profile config "ha-481559": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 15:01:50.623013  701984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 15:01:50.640864  701984 main.go:141] libmachine: Using SSH client type: native
	I1006 15:01:50.641165  701984 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32893 <nil> <nil>}
	I1006 15:01:50.641193  701984 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1006 15:01:50.900815  701984 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1006 15:01:50.900843  701984 machine.go:96] duration metric: took 4.317828783s to provisionDockerMachine
	I1006 15:01:50.900853  701984 start.go:293] postStartSetup for "ha-481559" (driver="docker")
	I1006 15:01:50.900863  701984 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1006 15:01:50.900923  701984 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1006 15:01:50.900961  701984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 15:01:50.918547  701984 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32893 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa Username:docker}
	I1006 15:01:51.021081  701984 ssh_runner.go:195] Run: cat /etc/os-release
	I1006 15:01:51.024764  701984 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1006 15:01:51.024788  701984 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1006 15:01:51.024798  701984 filesync.go:126] Scanning /home/jenkins/minikube-integration/21701-626179/.minikube/addons for local assets ...
	I1006 15:01:51.024843  701984 filesync.go:126] Scanning /home/jenkins/minikube-integration/21701-626179/.minikube/files for local assets ...
	I1006 15:01:51.024912  701984 filesync.go:149] local asset: /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem -> 6297192.pem in /etc/ssl/certs
	I1006 15:01:51.024927  701984 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem -> /etc/ssl/certs/6297192.pem
	I1006 15:01:51.025019  701984 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1006 15:01:51.032826  701984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem --> /etc/ssl/certs/6297192.pem (1708 bytes)
	I1006 15:01:51.050602  701984 start.go:296] duration metric: took 149.73063ms for postStartSetup
	I1006 15:01:51.050696  701984 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1006 15:01:51.050748  701984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 15:01:51.068484  701984 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32893 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa Username:docker}
	I1006 15:01:51.167707  701984 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1006 15:01:51.172531  701984 fix.go:56] duration metric: took 4.877147401s for fixHost
	I1006 15:01:51.172561  701984 start.go:83] releasing machines lock for "ha-481559", held for 4.877200795s
	I1006 15:01:51.172636  701984 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-481559
	I1006 15:01:51.190941  701984 ssh_runner.go:195] Run: cat /version.json
	I1006 15:01:51.191006  701984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 15:01:51.191054  701984 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1006 15:01:51.191134  701984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 15:01:51.209128  701984 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32893 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa Username:docker}
	I1006 15:01:51.209584  701984 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32893 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa Username:docker}
	I1006 15:01:51.362495  701984 ssh_runner.go:195] Run: systemctl --version
	I1006 15:01:51.369363  701984 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1006 15:01:51.404999  701984 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1006 15:01:51.409958  701984 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1006 15:01:51.410028  701984 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1006 15:01:51.418138  701984 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1006 15:01:51.418168  701984 start.go:495] detecting cgroup driver to use...
	I1006 15:01:51.418201  701984 detect.go:190] detected "systemd" cgroup driver on host os
	I1006 15:01:51.418264  701984 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1006 15:01:51.432500  701984 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1006 15:01:51.444740  701984 docker.go:218] disabling cri-docker service (if available) ...
	I1006 15:01:51.444799  701984 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1006 15:01:51.459568  701984 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1006 15:01:51.472638  701984 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1006 15:01:51.548093  701984 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1006 15:01:51.629502  701984 docker.go:234] disabling docker service ...
	I1006 15:01:51.629574  701984 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1006 15:01:51.643687  701984 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1006 15:01:51.656528  701984 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1006 15:01:51.734011  701984 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1006 15:01:51.812779  701984 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1006 15:01:51.825167  701984 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1006 15:01:51.839186  701984 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1006 15:01:51.839274  701984 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 15:01:51.848529  701984 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1006 15:01:51.848608  701984 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 15:01:51.857415  701984 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 15:01:51.866115  701984 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 15:01:51.874826  701984 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1006 15:01:51.882836  701984 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 15:01:51.891797  701984 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 15:01:51.900171  701984 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 15:01:51.908782  701984 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1006 15:01:51.916072  701984 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1006 15:01:51.923289  701984 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 15:01:51.999114  701984 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1006 15:01:52.103785  701984 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1006 15:01:52.103847  701984 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1006 15:01:52.107845  701984 start.go:563] Will wait 60s for crictl version
	I1006 15:01:52.107895  701984 ssh_runner.go:195] Run: which crictl
	I1006 15:01:52.111706  701984 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1006 15:01:52.137020  701984 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1006 15:01:52.137126  701984 ssh_runner.go:195] Run: crio --version
	I1006 15:01:52.166358  701984 ssh_runner.go:195] Run: crio --version
	I1006 15:01:52.197148  701984 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1006 15:01:52.198353  701984 cli_runner.go:164] Run: docker network inspect ha-481559 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1006 15:01:52.216087  701984 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1006 15:01:52.220573  701984 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1006 15:01:52.231278  701984 kubeadm.go:883] updating cluster {Name:ha-481559 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-481559 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1006 15:01:52.231400  701984 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1006 15:01:52.231450  701984 ssh_runner.go:195] Run: sudo crictl images --output json
	I1006 15:01:52.264781  701984 crio.go:514] all images are preloaded for cri-o runtime.
	I1006 15:01:52.264801  701984 crio.go:433] Images already preloaded, skipping extraction
	I1006 15:01:52.264844  701984 ssh_runner.go:195] Run: sudo crictl images --output json
	I1006 15:01:52.291584  701984 crio.go:514] all images are preloaded for cri-o runtime.
	I1006 15:01:52.291607  701984 cache_images.go:85] Images are preloaded, skipping loading
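The preload check above simply lists the runtime's image store and compares it with the image set expected for v1.34.1. A quick way to eyeball the same store, assuming jq is available on the node:

    sudo crictl images --output json | jq -r '.images[].repoTags[]' | sort
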
	I1006 15:01:52.291614  701984 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1006 15:01:52.291708  701984 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-481559 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-481559 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1006 15:01:52.291770  701984 ssh_runner.go:195] Run: crio config
	I1006 15:01:52.338567  701984 cni.go:84] Creating CNI manager for ""
	I1006 15:01:52.338589  701984 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1006 15:01:52.338610  701984 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1006 15:01:52.338632  701984 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-481559 NodeName:ha-481559 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1006 15:01:52.338744  701984 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-481559"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
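A rendered config like the one above can also be checked offline before anything is applied; a sketch using the node's pinned kubeadm binary and the staged file from the log (kubeadm config validate exists since v1.26):

    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new
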
	
	I1006 15:01:52.338801  701984 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1006 15:01:52.347483  701984 binaries.go:44] Found k8s binaries, skipping transfer
	I1006 15:01:52.347568  701984 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1006 15:01:52.355357  701984 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1006 15:01:52.367896  701984 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
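The two units copied above follow the standard systemd drop-in layout: the base kubelet.service plus 10-kubeadm.conf, whose empty ExecStart= first clears the stock command before the override shown earlier takes effect. To inspect the merged result on the node (a sketch):

    systemctl cat kubelet
    # prints /lib/systemd/system/kubelet.service followed by
    # /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
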
	I1006 15:01:52.380296  701984 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1006 15:01:52.392680  701984 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1006 15:01:52.396473  701984 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1006 15:01:52.406328  701984 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 15:01:52.485101  701984 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1006 15:01:52.514051  701984 certs.go:69] Setting up /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559 for IP: 192.168.49.2
	I1006 15:01:52.514073  701984 certs.go:195] generating shared ca certs ...
	I1006 15:01:52.514090  701984 certs.go:227] acquiring lock for ca certs: {Name:mka0cc25cb6a953e937aa825fc55167759271aaf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 15:01:52.514284  701984 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21701-626179/.minikube/ca.key
	I1006 15:01:52.514339  701984 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21701-626179/.minikube/proxy-client-ca.key
	I1006 15:01:52.514355  701984 certs.go:257] generating profile certs ...
	I1006 15:01:52.514462  701984 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/client.key
	I1006 15:01:52.514544  701984 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.key.ac196ca6
	I1006 15:01:52.514595  701984 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.key
	I1006 15:01:52.514610  701984 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1006 15:01:52.514629  701984 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1006 15:01:52.514646  701984 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1006 15:01:52.514666  701984 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1006 15:01:52.514682  701984 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1006 15:01:52.514731  701984 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1006 15:01:52.514762  701984 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1006 15:01:52.514780  701984 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1006 15:01:52.514855  701984 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/629719.pem (1338 bytes)
	W1006 15:01:52.514898  701984 certs.go:480] ignoring /home/jenkins/minikube-integration/21701-626179/.minikube/certs/629719_empty.pem, impossibly tiny 0 bytes
	I1006 15:01:52.514911  701984 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca-key.pem (1675 bytes)
	I1006 15:01:52.514943  701984 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem (1082 bytes)
	I1006 15:01:52.514975  701984 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/cert.pem (1123 bytes)
	I1006 15:01:52.515013  701984 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/key.pem (1679 bytes)
	I1006 15:01:52.515066  701984 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem (1708 bytes)
	I1006 15:01:52.515159  701984 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/629719.pem -> /usr/share/ca-certificates/629719.pem
	I1006 15:01:52.515184  701984 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem -> /usr/share/ca-certificates/6297192.pem
	I1006 15:01:52.515222  701984 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1006 15:01:52.515850  701984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1006 15:01:52.536297  701984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1006 15:01:52.555790  701984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1006 15:01:52.575066  701984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1006 15:01:52.597425  701984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1006 15:01:52.616188  701984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1006 15:01:52.633992  701984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1006 15:01:52.651317  701984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1006 15:01:52.668942  701984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/certs/629719.pem --> /usr/share/ca-certificates/629719.pem (1338 bytes)
	I1006 15:01:52.685650  701984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem --> /usr/share/ca-certificates/6297192.pem (1708 bytes)
	I1006 15:01:52.702738  701984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1006 15:01:52.720514  701984 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1006 15:01:52.732781  701984 ssh_runner.go:195] Run: openssl version
	I1006 15:01:52.739000  701984 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/629719.pem && ln -fs /usr/share/ca-certificates/629719.pem /etc/ssl/certs/629719.pem"
	I1006 15:01:52.747351  701984 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/629719.pem
	I1006 15:01:52.751001  701984 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  6 14:13 /usr/share/ca-certificates/629719.pem
	I1006 15:01:52.751062  701984 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/629719.pem
	I1006 15:01:52.785464  701984 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/629719.pem /etc/ssl/certs/51391683.0"
	I1006 15:01:52.793884  701984 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6297192.pem && ln -fs /usr/share/ca-certificates/6297192.pem /etc/ssl/certs/6297192.pem"
	I1006 15:01:52.802527  701984 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6297192.pem
	I1006 15:01:52.806287  701984 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  6 14:13 /usr/share/ca-certificates/6297192.pem
	I1006 15:01:52.806346  701984 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6297192.pem
	I1006 15:01:52.839905  701984 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6297192.pem /etc/ssl/certs/3ec20f2e.0"
	I1006 15:01:52.847950  701984 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1006 15:01:52.856269  701984 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1006 15:01:52.859833  701984 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  6 13:56 /usr/share/ca-certificates/minikubeCA.pem
	I1006 15:01:52.859889  701984 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1006 15:01:52.893744  701984 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
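Each hash-and-symlink cycle above follows OpenSSL's c_rehash convention: the subject hash printed by openssl x509 -hash names the /etc/ssl/certs/<hash>.0 link that CApath lookups use, which is why the minikube CA ends up as b5213941.0. One cycle in isolation (a sketch; the verify target is just an example of a cert signed by this CA):

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"
    openssl verify -CApath /etc/ssl/certs /var/lib/minikube/certs/apiserver.crt
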
	I1006 15:01:52.902397  701984 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1006 15:01:52.906224  701984 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1006 15:01:52.940584  701984 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1006 15:01:52.975121  701984 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1006 15:01:53.010068  701984 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1006 15:01:53.056395  701984 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1006 15:01:53.098917  701984 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
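The six openssl runs above are freshness probes: -checkend 86400 exits non-zero when a certificate has expired or will expire within the next 24 hours, which is what would push minikube down its cert-regeneration path. The idiom in isolation (a sketch against one of the files checked above):

    if openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400; then
      echo "valid for at least another 24h"
    else
      echo "expired or expiring within 24h"
    fi
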
	I1006 15:01:53.133146  701984 kubeadm.go:400] StartCluster: {Name:ha-481559 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-481559 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 15:01:53.133293  701984 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1006 15:01:53.133350  701984 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1006 15:01:53.161765  701984 cri.go:89] found id: ""
	I1006 15:01:53.161834  701984 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1006 15:01:53.169767  701984 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1006 15:01:53.169786  701984 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1006 15:01:53.169835  701984 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1006 15:01:53.177348  701984 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1006 15:01:53.177860  701984 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-481559" does not appear in /home/jenkins/minikube-integration/21701-626179/kubeconfig
	I1006 15:01:53.178037  701984 kubeconfig.go:62] /home/jenkins/minikube-integration/21701-626179/kubeconfig needs updating (will repair): [kubeconfig missing "ha-481559" cluster setting kubeconfig missing "ha-481559" context setting]
	I1006 15:01:53.178466  701984 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-626179/kubeconfig: {Name:mke84a74c9d22714f21826744ac414fa621492d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 15:01:53.179258  701984 kapi.go:59] client config for ha-481559: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/client.crt", KeyFile:"/home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/client.key", CAFile:"/home/jenkins/minikube-integration/21701-626179/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1006 15:01:53.179749  701984 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1006 15:01:53.179781  701984 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1006 15:01:53.179788  701984 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1006 15:01:53.179794  701984 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1006 15:01:53.179789  701984 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1006 15:01:53.179801  701984 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1006 15:01:53.180239  701984 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1006 15:01:53.188398  701984 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1006 15:01:53.188432  701984 kubeadm.go:601] duration metric: took 18.640424ms to restartPrimaryControlPlane
	I1006 15:01:53.188443  701984 kubeadm.go:402] duration metric: took 55.31048ms to StartCluster
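The restart decision above hinges on a plain diff: when the staged kubeadm.yaml.new matches the kubeadm.yaml already on disk, exit status 0 lets minikube skip any kubeadm reconfiguration. The same check by hand (a sketch):

    sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new \
      && echo "config unchanged - no reconfigure needed"
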
	I1006 15:01:53.188464  701984 settings.go:142] acquiring lock: {Name:mk49b10f71f24d1f54d5c453b3b04e717e9a9100 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 15:01:53.188537  701984 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21701-626179/kubeconfig
	I1006 15:01:53.189024  701984 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-626179/kubeconfig: {Name:mke84a74c9d22714f21826744ac414fa621492d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 15:01:53.189291  701984 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1006 15:01:53.189351  701984 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1006 15:01:53.189450  701984 addons.go:69] Setting storage-provisioner=true in profile "ha-481559"
	I1006 15:01:53.189472  701984 addons.go:238] Setting addon storage-provisioner=true in "ha-481559"
	I1006 15:01:53.189480  701984 addons.go:69] Setting default-storageclass=true in profile "ha-481559"
	I1006 15:01:53.189497  701984 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-481559"
	I1006 15:01:53.189510  701984 host.go:66] Checking if "ha-481559" exists ...
	I1006 15:01:53.189548  701984 config.go:182] Loaded profile config "ha-481559": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 15:01:53.189835  701984 cli_runner.go:164] Run: docker container inspect ha-481559 --format={{.State.Status}}
	I1006 15:01:53.190004  701984 cli_runner.go:164] Run: docker container inspect ha-481559 --format={{.State.Status}}
	I1006 15:01:53.192670  701984 out.go:179] * Verifying Kubernetes components...
	I1006 15:01:53.193943  701984 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 15:01:53.209649  701984 kapi.go:59] client config for ha-481559: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/client.crt", KeyFile:"/home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/client.key", CAFile:"/home/jenkins/minikube-integration/21701-626179/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1006 15:01:53.210039  701984 addons.go:238] Setting addon default-storageclass=true in "ha-481559"
	I1006 15:01:53.210089  701984 host.go:66] Checking if "ha-481559" exists ...
	I1006 15:01:53.210542  701984 cli_runner.go:164] Run: docker container inspect ha-481559 --format={{.State.Status}}
	I1006 15:01:53.211200  701984 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1006 15:01:53.212531  701984 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 15:01:53.212549  701984 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1006 15:01:53.212600  701984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 15:01:53.238402  701984 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1006 15:01:53.238430  701984 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1006 15:01:53.238493  701984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 15:01:53.240785  701984 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32893 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa Username:docker}
	I1006 15:01:53.257980  701984 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32893 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa Username:docker}
	I1006 15:01:53.293467  701984 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1006 15:01:53.307364  701984 node_ready.go:35] waiting up to 6m0s for node "ha-481559" to be "Ready" ...
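The 6-minute wait above polls the node's Ready condition through the API server. The equivalent probe from outside the minikube binary (a sketch, assuming the repaired kubeconfig context from earlier):

    kubectl --context ha-481559 wait --for=condition=Ready node/ha-481559 --timeout=6m
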
	I1006 15:01:53.350572  701984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 15:01:53.365695  701984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	W1006 15:01:53.407298  701984 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 15:01:53.407342  701984 retry.go:31] will retry after 357.649421ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 15:01:53.420853  701984 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 15:01:53.420888  701984 retry.go:31] will retry after 373.269917ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
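Both addon applies fail the same way: kubectl's client-side validation must download the server's OpenAPI document, and nothing is listening on 8443 yet, so each attempt dies with connection refused and retry.go schedules another try. Two probes that reproduce the symptom from the node (a sketch):

    curl -sk https://localhost:8443/openapi/v2 -o /dev/null || echo "apiserver not answering yet"
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.34.1/kubectl get --raw /healthz
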
	I1006 15:01:53.765311  701984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 15:01:53.794914  701984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1006 15:01:53.820162  701984 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 15:01:53.820198  701984 retry.go:31] will retry after 560.850722ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 15:01:53.849381  701984 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 15:01:53.849415  701984 retry.go:31] will retry after 534.611771ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 15:01:54.381588  701984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 15:01:54.385156  701984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1006 15:01:54.438225  701984 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 15:01:54.438264  701984 retry.go:31] will retry after 554.670785ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 15:01:54.439112  701984 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 15:01:54.439133  701984 retry.go:31] will retry after 308.986378ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 15:01:54.748751  701984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1006 15:01:54.803407  701984 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 15:01:54.803442  701984 retry.go:31] will retry after 474.547882ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 15:01:54.993194  701984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1006 15:01:55.046254  701984 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 15:01:55.046297  701984 retry.go:31] will retry after 677.664195ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 15:01:55.278726  701984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1006 15:01:55.308628  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:01:55.332936  701984 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 15:01:55.332970  701984 retry.go:31] will retry after 1.775881807s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 15:01:55.724438  701984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1006 15:01:55.776937  701984 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 15:01:55.776969  701984 retry.go:31] will retry after 843.878196ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 15:01:56.621961  701984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1006 15:01:56.675428  701984 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 15:01:56.675463  701984 retry.go:31] will retry after 1.450357982s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 15:01:57.109402  701984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1006 15:01:57.163276  701984 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 15:01:57.163309  701984 retry.go:31] will retry after 2.464163888s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 15:01:57.308897  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	I1006 15:01:58.126261  701984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1006 15:01:58.179363  701984 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 15:01:58.179391  701984 retry.go:31] will retry after 3.126763455s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 15:01:59.628619  701984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1006 15:01:59.681154  701984 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 15:01:59.681190  701984 retry.go:31] will retry after 1.480440704s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 15:01:59.808774  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	I1006 15:02:01.162599  701984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1006 15:02:01.216807  701984 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 15:02:01.216851  701984 retry.go:31] will retry after 3.761635647s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 15:02:01.307128  701984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1006 15:02:01.362791  701984 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 15:02:01.362827  701984 retry.go:31] will retry after 3.177813602s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 15:02:01.808826  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:02:04.308637  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	I1006 15:02:04.540904  701984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1006 15:02:04.594444  701984 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 15:02:04.594481  701984 retry.go:31] will retry after 9.418537731s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 15:02:04.979473  701984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1006 15:02:05.032152  701984 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 15:02:05.032191  701984 retry.go:31] will retry after 8.203513024s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 15:02:06.808141  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:02:08.808703  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:02:11.308146  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	I1006 15:02:13.236126  701984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1006 15:02:13.291139  701984 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 15:02:13.291178  701984 retry.go:31] will retry after 13.734152969s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 15:02:13.308624  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	I1006 15:02:14.013259  701984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1006 15:02:14.066927  701984 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 15:02:14.066963  701984 retry.go:31] will retry after 4.968343953s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 15:02:15.808091  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:02:17.808317  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	I1006 15:02:19.035709  701984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1006 15:02:19.089785  701984 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 15:02:19.089821  701984 retry.go:31] will retry after 18.450329534s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 15:02:19.808717  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:02:22.308279  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:02:24.808005  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:02:26.808376  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	I1006 15:02:27.025657  701984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1006 15:02:27.079430  701984 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 15:02:27.079467  701984 retry.go:31] will retry after 18.308744233s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 15:02:28.808528  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:02:31.308878  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:02:33.808327  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:02:35.808829  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	I1006 15:02:37.540393  701984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1006 15:02:37.593965  701984 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 15:02:37.593995  701984 retry.go:31] will retry after 14.430254714s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 15:02:38.308827  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:02:40.808189  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:02:42.808607  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:02:44.808693  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	I1006 15:02:45.388851  701984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1006 15:02:45.443913  701984 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 15:02:45.443945  701984 retry.go:31] will retry after 30.607683046s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 15:02:47.309012  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:02:49.808000  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:02:51.808101  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	I1006 15:02:52.024419  701984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1006 15:02:52.078859  701984 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 15:02:52.078891  701984 retry.go:31] will retry after 32.375753443s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 15:02:53.808234  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:02:55.808746  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:02:58.308064  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:03:00.308503  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:03:02.808227  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:03:04.808951  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:03:07.308259  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:03:09.308723  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:03:11.808466  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:03:13.808963  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	I1006 15:03:16.052424  701984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1006 15:03:16.106854  701984 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 15:03:16.106899  701984 retry.go:31] will retry after 23.781842061s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 15:03:16.308055  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:03:18.308668  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:03:20.808285  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:03:22.808988  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	I1006 15:03:24.455485  701984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1006 15:03:24.509566  701984 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 15:03:24.509687  701984 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1006 15:03:25.308449  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:03:27.308947  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:03:29.808772  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:03:32.308333  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:03:34.308810  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:03:36.808133  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:03:38.808620  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	I1006 15:03:39.889153  701984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1006 15:03:39.944329  701984 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 15:03:39.944473  701984 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1006 15:03:39.946959  701984 out.go:179] * Enabled addons: 
	I1006 15:03:39.947914  701984 addons.go:514] duration metric: took 1m46.758571336s for enable addons: enabled=[]
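
Every apply attempt above fails the same way: kubectl cannot reach the API server on localhost:8443, so the --validate=false escape hatch the error message suggests would not help, because the apply itself still needs a live apiserver. A minimal check to confirm the apiserver never came up (a sketch, assuming curl is available in the node image; the profile name ha-481559 is taken from the log above):

	minikube ssh -p ha-481559 -- curl -ksS https://localhost:8443/readyz
	# "connection refused" here means kube-apiserver is not listening at all,
	# which matches every retry recorded in this log.
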
	W1006 15:03:41.308834  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	[... the identical node_ready.go:55 "connection refused" warning repeats every 2-2.5s from 15:03:43 through 15:07:50; 109 lines elided ...]
	W1006 15:07:52.809004  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	I1006 15:07:53.308060  701984 node_ready.go:38] duration metric: took 6m0.000216007s for node "ha-481559" to be "Ready" ...
	I1006 15:07:53.311054  701984 out.go:203] 
	W1006 15:07:53.312196  701984 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1006 15:07:53.312219  701984 out.go:285] * 
	W1006 15:07:53.313838  701984 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1006 15:07:53.315023  701984 out.go:203] 
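
The sections below are the node's post-mortem log dump. The CRI-O log is the decisive one: every CreateContainer call for a control-plane pod dies with "cannot open sd-bus: No such file or directory". To pull just those errors from a live node, something like the following should work (a sketch, assuming CRI-O logs to journald in this image):

	minikube ssh -p ha-481559 -- sudo journalctl -u crio --no-pager | grep "Container creation error"
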
	
	
	==> CRI-O <==
	Oct 06 15:07:46 ha-481559 crio[519]: time="2025-10-06T15:07:46.629237885Z" level=info msg="createCtr: removing container 7cccc243360d2822c57ef267495d4ba2f52ac7d1a172de4f7bf86c2782752b95" id=94453879-139c-429c-a5b1-5ee37a0899b6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 15:07:46 ha-481559 crio[519]: time="2025-10-06T15:07:46.629267049Z" level=info msg="createCtr: deleting container 7cccc243360d2822c57ef267495d4ba2f52ac7d1a172de4f7bf86c2782752b95 from storage" id=94453879-139c-429c-a5b1-5ee37a0899b6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 15:07:46 ha-481559 crio[519]: time="2025-10-06T15:07:46.631063814Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-ha-481559_kube-system_b4e1cca8a09d3789a7e0862428dfe0db_0" id=94453879-139c-429c-a5b1-5ee37a0899b6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 15:07:53 ha-481559 crio[519]: time="2025-10-06T15:07:53.6056339Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=9d4ff92e-10b7-4cbd-a66f-12aec986be76 name=/runtime.v1.ImageService/ImageStatus
	Oct 06 15:07:53 ha-481559 crio[519]: time="2025-10-06T15:07:53.606711328Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=5d809907-2803-4774-bb2b-994147e1fe9e name=/runtime.v1.ImageService/ImageStatus
	Oct 06 15:07:53 ha-481559 crio[519]: time="2025-10-06T15:07:53.607814333Z" level=info msg="Creating container: kube-system/etcd-ha-481559/etcd" id=a0a68d32-290d-475f-98e2-039b9e340155 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 15:07:53 ha-481559 crio[519]: time="2025-10-06T15:07:53.608120994Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 15:07:53 ha-481559 crio[519]: time="2025-10-06T15:07:53.6122724Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 15:07:53 ha-481559 crio[519]: time="2025-10-06T15:07:53.612720273Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 15:07:53 ha-481559 crio[519]: time="2025-10-06T15:07:53.635848688Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=a0a68d32-290d-475f-98e2-039b9e340155 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 15:07:53 ha-481559 crio[519]: time="2025-10-06T15:07:53.637556379Z" level=info msg="createCtr: deleting container ID 5010fd13ff74bfb6cd5c840a91b2b7c210c7ea5032b47c702543d7ccf65b7d27 from idIndex" id=a0a68d32-290d-475f-98e2-039b9e340155 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 15:07:53 ha-481559 crio[519]: time="2025-10-06T15:07:53.637596029Z" level=info msg="createCtr: removing container 5010fd13ff74bfb6cd5c840a91b2b7c210c7ea5032b47c702543d7ccf65b7d27" id=a0a68d32-290d-475f-98e2-039b9e340155 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 15:07:53 ha-481559 crio[519]: time="2025-10-06T15:07:53.637629866Z" level=info msg="createCtr: deleting container 5010fd13ff74bfb6cd5c840a91b2b7c210c7ea5032b47c702543d7ccf65b7d27 from storage" id=a0a68d32-290d-475f-98e2-039b9e340155 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 15:07:53 ha-481559 crio[519]: time="2025-10-06T15:07:53.643172574Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-ha-481559_kube-system_520c6060936b1c2aac479c99ed6c0355_0" id=a0a68d32-290d-475f-98e2-039b9e340155 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 15:07:56 ha-481559 crio[519]: time="2025-10-06T15:07:56.604978083Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=4b7667da-134d-4f28-bd8d-31229cd456f4 name=/runtime.v1.ImageService/ImageStatus
	Oct 06 15:07:56 ha-481559 crio[519]: time="2025-10-06T15:07:56.605937787Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=956799c7-1c22-4dc4-900e-38eba06bdefd name=/runtime.v1.ImageService/ImageStatus
	Oct 06 15:07:56 ha-481559 crio[519]: time="2025-10-06T15:07:56.607319223Z" level=info msg="Creating container: kube-system/kube-controller-manager-ha-481559/kube-controller-manager" id=ab1fdfe1-8a97-4d51-ba07-1d0a82a889be name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 15:07:56 ha-481559 crio[519]: time="2025-10-06T15:07:56.607686889Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 15:07:56 ha-481559 crio[519]: time="2025-10-06T15:07:56.611466761Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 15:07:56 ha-481559 crio[519]: time="2025-10-06T15:07:56.612023975Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 15:07:56 ha-481559 crio[519]: time="2025-10-06T15:07:56.629132287Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=ab1fdfe1-8a97-4d51-ba07-1d0a82a889be name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 15:07:56 ha-481559 crio[519]: time="2025-10-06T15:07:56.630490666Z" level=info msg="createCtr: deleting container ID 36de4b88c8668bb73c2de85cd7883c07eb58758c86ac1d4b845c347ca19e50c5 from idIndex" id=ab1fdfe1-8a97-4d51-ba07-1d0a82a889be name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 15:07:56 ha-481559 crio[519]: time="2025-10-06T15:07:56.630529753Z" level=info msg="createCtr: removing container 36de4b88c8668bb73c2de85cd7883c07eb58758c86ac1d4b845c347ca19e50c5" id=ab1fdfe1-8a97-4d51-ba07-1d0a82a889be name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 15:07:56 ha-481559 crio[519]: time="2025-10-06T15:07:56.630560403Z" level=info msg="createCtr: deleting container 36de4b88c8668bb73c2de85cd7883c07eb58758c86ac1d4b845c347ca19e50c5 from storage" id=ab1fdfe1-8a97-4d51-ba07-1d0a82a889be name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 15:07:56 ha-481559 crio[519]: time="2025-10-06T15:07:56.632720397Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-ha-481559_kube-system_5f3181798721fe8691d871f051785efc_0" id=ab1fdfe1-8a97-4d51-ba07-1d0a82a889be name=/runtime.v1.RuntimeService/CreateContainer
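
The "cannot open sd-bus" error means the OCI runtime was asked to create containers in systemd-managed cgroups but could not reach systemd over D-Bus, so no control-plane container (etcd, kube-apiserver, kube-controller-manager) was ever created; every connection-refused error in this test is downstream of that. One speculative workaround, not a verified fix, is to switch CRI-O to the cgroupfs cgroup manager via a drop-in (CRI-O requires conmon_cgroup to be "pod" when cgroup_manager is "cgroupfs"):

	# inside the node, e.g. via `minikube ssh -p ha-481559`
	sudo tee /etc/crio/crio.conf.d/99-cgroupfs.conf <<'EOF'
	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	EOF
	sudo systemctl restart crio   # assumes systemd itself is healthy in the node
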
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
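
The empty table above is consistent with the CRI-O errors: not a single container was ever created on this node. The same view can be reproduced on a live node (a sketch; crictl ships in the minikube node image):

	minikube ssh -p ha-481559 -- sudo crictl ps -a
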
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 15:07:57.284955    2361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 15:07:57.285566    2361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 15:07:57.287140    2361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 15:07:57.287556    2361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 15:07:57.289071    2361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	
	
	==> kernel <==
	 15:07:57 up  5:50,  0 user,  load average: 0.08, 0.04, 0.09
	Linux ha-481559 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 06 15:07:48 ha-481559 kubelet[675]: E1006 15:07:48.196145     675 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://192.168.49.2:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
	Oct 06 15:07:48 ha-481559 kubelet[675]: E1006 15:07:48.248269     675 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-481559?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 06 15:07:48 ha-481559 kubelet[675]: I1006 15:07:48.419061     675 kubelet_node_status.go:75] "Attempting to register node" node="ha-481559"
	Oct 06 15:07:48 ha-481559 kubelet[675]: E1006 15:07:48.419541     675 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-481559"
	Oct 06 15:07:51 ha-481559 kubelet[675]: E1006 15:07:51.129592     675 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8443: connect: connection refused" event="&Event{ObjectMeta:{ha-481559.186bef0b9dfc36de  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-481559,UID:ha-481559,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ha-481559 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ha-481559,},FirstTimestamp:2025-10-06 15:01:52.592541406 +0000 UTC m=+0.076229606,LastTimestamp:2025-10-06 15:01:52.592541406 +0000 UTC m=+0.076229606,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-481559,}"
	Oct 06 15:07:52 ha-481559 kubelet[675]: E1006 15:07:52.618976     675 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-481559\" not found"
	Oct 06 15:07:53 ha-481559 kubelet[675]: E1006 15:07:53.605090     675 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-481559\" not found" node="ha-481559"
	Oct 06 15:07:53 ha-481559 kubelet[675]: E1006 15:07:53.643581     675 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 06 15:07:53 ha-481559 kubelet[675]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 06 15:07:53 ha-481559 kubelet[675]:  > podSandboxID="2509df0fbb37ea26e7c4176db5318bb5b7bb232dde96912d6badc3737828a2f0"
	Oct 06 15:07:53 ha-481559 kubelet[675]: E1006 15:07:53.643723     675 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 06 15:07:53 ha-481559 kubelet[675]:         container etcd start failed in pod etcd-ha-481559_kube-system(520c6060936b1c2aac479c99ed6c0355): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 06 15:07:53 ha-481559 kubelet[675]:  > logger="UnhandledError"
	Oct 06 15:07:53 ha-481559 kubelet[675]: E1006 15:07:53.643767     675 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-ha-481559" podUID="520c6060936b1c2aac479c99ed6c0355"
	Oct 06 15:07:55 ha-481559 kubelet[675]: E1006 15:07:55.248957     675 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-481559?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 06 15:07:55 ha-481559 kubelet[675]: I1006 15:07:55.421634     675 kubelet_node_status.go:75] "Attempting to register node" node="ha-481559"
	Oct 06 15:07:55 ha-481559 kubelet[675]: E1006 15:07:55.422098     675 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-481559"
	Oct 06 15:07:56 ha-481559 kubelet[675]: E1006 15:07:56.604521     675 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-481559\" not found" node="ha-481559"
	Oct 06 15:07:56 ha-481559 kubelet[675]: E1006 15:07:56.632984     675 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 06 15:07:56 ha-481559 kubelet[675]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 06 15:07:56 ha-481559 kubelet[675]:  > podSandboxID="71ebb15c15a076b5e8bbeb63b220e7169b705d9032094ef5e3823a2eacc0feef"
	Oct 06 15:07:56 ha-481559 kubelet[675]: E1006 15:07:56.633080     675 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 06 15:07:56 ha-481559 kubelet[675]:         container kube-controller-manager start failed in pod kube-controller-manager-ha-481559_kube-system(5f3181798721fe8691d871f051785efc): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 06 15:07:56 ha-481559 kubelet[675]:  > logger="UnhandledError"
	Oct 06 15:07:56 ha-481559 kubelet[675]: E1006 15:07:56.633109     675 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-ha-481559" podUID="5f3181798721fe8691d871f051785efc"
	

                                                
                                                
-- /stdout --
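
Every container-create attempt in the crio and kubelet logs above fails with the same error, "container create failed: cannot open sd-bus: No such file or directory", which is why etcd and kube-controller-manager never come up and every request to the apiserver on :8443 is refused. An sd-bus open failure usually means the runtime's systemd cgroup driver cannot reach systemd's D-Bus socket inside the node container. A minimal diagnostic sketch, assuming the ha-481559 node container is still running and crio's configuration lives under /etc/crio/ (neither is verified by this report):

  # Check for the D-Bus sockets the systemd cgroup driver needs inside the node
  docker exec ha-481559 ls -l /run/dbus/system_bus_socket /run/systemd/private

  # Check which cgroup manager crio is configured with (systemd vs cgroupfs)
  docker exec ha-481559 grep -Rn cgroup_manager /etc/crio/

If cgroup_manager is set to "systemd" but those sockets are absent, pointing crio at the cgroupfs manager (or fixing systemd/D-Bus startup in the node image) is one plausible remedy.
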
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-481559 -n ha-481559
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-481559 -n ha-481559: exit status 2 (293.743324ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "ha-481559" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddSecondaryNode (1.48s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.53s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:305: expected profile "ha-481559" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-481559\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-481559\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d\",\"Memory\":3072,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.34.1\",\"ClusterName\":\"ha-481559\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.49.2\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"MountString\":\"\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"DisableCoreDNSLog\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
ha_test.go:309: expected profile "ha-481559" in json of 'profile list' to have "HAppy" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-481559\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-481559\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d\",\"Memory\":3072,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.34.1\",\"ClusterName\":\"ha-481559\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.49.2\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"MountString\":\"\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"DisableCoreDNSLog\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-481559
helpers_test.go:243: (dbg) docker inspect ha-481559:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "8b017d29b6b188c11217460aa328e959f3cceef4aaac68c0efbe9c3f356f27b0",
	        "Created": "2025-10-06T14:44:39.623616791Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 702186,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-06T15:01:46.338559643Z",
	            "FinishedAt": "2025-10-06T15:01:45.038433314Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/8b017d29b6b188c11217460aa328e959f3cceef4aaac68c0efbe9c3f356f27b0/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8b017d29b6b188c11217460aa328e959f3cceef4aaac68c0efbe9c3f356f27b0/hostname",
	        "HostsPath": "/var/lib/docker/containers/8b017d29b6b188c11217460aa328e959f3cceef4aaac68c0efbe9c3f356f27b0/hosts",
	        "LogPath": "/var/lib/docker/containers/8b017d29b6b188c11217460aa328e959f3cceef4aaac68c0efbe9c3f356f27b0/8b017d29b6b188c11217460aa328e959f3cceef4aaac68c0efbe9c3f356f27b0-json.log",
	        "Name": "/ha-481559",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-481559:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-481559",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "8b017d29b6b188c11217460aa328e959f3cceef4aaac68c0efbe9c3f356f27b0",
	                "LowerDir": "/var/lib/docker/overlay2/764c4d13032cea1c981a1513641378a4e309e5bc01b5356c6fecba1c3e0e2311-init/diff:/var/lib/docker/overlay2/498c39ad2e273bbda04a4b230222b9767ea2da097b1fe98436168d26143cd080/diff",
	                "MergedDir": "/var/lib/docker/overlay2/764c4d13032cea1c981a1513641378a4e309e5bc01b5356c6fecba1c3e0e2311/merged",
	                "UpperDir": "/var/lib/docker/overlay2/764c4d13032cea1c981a1513641378a4e309e5bc01b5356c6fecba1c3e0e2311/diff",
	                "WorkDir": "/var/lib/docker/overlay2/764c4d13032cea1c981a1513641378a4e309e5bc01b5356c6fecba1c3e0e2311/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-481559",
	                "Source": "/var/lib/docker/volumes/ha-481559/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-481559",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-481559",
	                "name.minikube.sigs.k8s.io": "ha-481559",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "96ad0a0c00ce1e2fd1255251fdbe6e26beae966a5054a86bbea20c89f537c09f",
	            "SandboxKey": "/var/run/docker/netns/96ad0a0c00ce",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32893"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32894"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32897"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32895"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32896"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-481559": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "da:92:da:5b:3d:78",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "be549c6a1ae4457d4629d9a7f86cde88021333ee0af8bb7a740b008115c43dde",
	                    "EndpointID": "c5dcb77b8e9feae93629ab92a205600e06ab65076f80e1ea27e6fbc473fcf4ef",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-481559",
	                        "8b017d29b6b1"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
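
The inspect output says the Docker layer itself is healthy: the container is "running", holds 192.168.49.2 on the ha-481559 network, and publishes the apiserver's 8443/tcp on 127.0.0.1:32896, so the breakage is inside the guest rather than at the driver level. The few fields the post-mortem keys on can be pulled with a Go template instead of scanning the full JSON; a sketch using the same template shape minikube itself runs later in these logs:

  docker inspect ha-481559 --format \
    '{{.State.Status}} {{(index .NetworkSettings.Networks "ha-481559").IPAddress}} {{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'

which should print "running 192.168.49.2 32896" for the state captured above.
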
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-481559 -n ha-481559
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-481559 -n ha-481559: exit status 2 (279.617535ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-481559 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                             ARGS                                             │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 14:52 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 14:53 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 14:53 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 14:53 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 14:53 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 14:53 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 14:53 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 14:54 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 14:54 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                        │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 14:54 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- exec  -- nslookup kubernetes.io                                         │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 14:54 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- exec  -- nslookup kubernetes.default                                    │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 14:54 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local                  │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 14:54 UTC │                     │
	│ kubectl │ ha-481559 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                        │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 14:54 UTC │                     │
	│ node    │ ha-481559 node add --alsologtostderr -v 5                                                    │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 14:54 UTC │                     │
	│ node    │ ha-481559 node stop m02 --alsologtostderr -v 5                                               │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 14:54 UTC │                     │
	│ node    │ ha-481559 node start m02 --alsologtostderr -v 5                                              │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 14:54 UTC │                     │
	│ node    │ ha-481559 node list --alsologtostderr -v 5                                                   │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 14:55 UTC │                     │
	│ stop    │ ha-481559 stop --alsologtostderr -v 5                                                        │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 14:55 UTC │ 06 Oct 25 14:55 UTC │
	│ start   │ ha-481559 start --wait true --alsologtostderr -v 5                                           │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 14:55 UTC │                     │
	│ node    │ ha-481559 node list --alsologtostderr -v 5                                                   │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 15:01 UTC │                     │
	│ node    │ ha-481559 node delete m03 --alsologtostderr -v 5                                             │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 15:01 UTC │                     │
	│ stop    │ ha-481559 stop --alsologtostderr -v 5                                                        │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 15:01 UTC │ 06 Oct 25 15:01 UTC │
	│ start   │ ha-481559 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 15:01 UTC │                     │
	│ node    │ ha-481559 node add --control-plane --alsologtostderr -v 5                                    │ ha-481559 │ jenkins │ v1.37.0 │ 06 Oct 25 15:07 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/06 15:01:46
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1006 15:01:46.116187  701984 out.go:360] Setting OutFile to fd 1 ...
	I1006 15:01:46.116327  701984 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 15:01:46.116336  701984 out.go:374] Setting ErrFile to fd 2...
	I1006 15:01:46.116340  701984 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 15:01:46.116564  701984 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-626179/.minikube/bin
	I1006 15:01:46.116989  701984 out.go:368] Setting JSON to false
	I1006 15:01:46.117973  701984 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":20642,"bootTime":1759742264,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1006 15:01:46.118071  701984 start.go:140] virtualization: kvm guest
	I1006 15:01:46.119930  701984 out.go:179] * [ha-481559] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1006 15:01:46.121071  701984 out.go:179]   - MINIKUBE_LOCATION=21701
	I1006 15:01:46.121071  701984 notify.go:220] Checking for updates...
	I1006 15:01:46.123063  701984 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1006 15:01:46.124433  701984 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21701-626179/kubeconfig
	I1006 15:01:46.125406  701984 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21701-626179/.minikube
	I1006 15:01:46.126304  701984 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1006 15:01:46.127330  701984 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1006 15:01:46.128989  701984 config.go:182] Loaded profile config "ha-481559": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 15:01:46.129680  701984 driver.go:421] Setting default libvirt URI to qemu:///system
	I1006 15:01:46.153833  701984 docker.go:123] docker version: linux-28.5.0:Docker Engine - Community
	I1006 15:01:46.153923  701984 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 15:01:46.210040  701984 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-06 15:01:46.200236285 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1006 15:01:46.210147  701984 docker.go:318] overlay module found
	I1006 15:01:46.211692  701984 out.go:179] * Using the docker driver based on existing profile
	I1006 15:01:46.212596  701984 start.go:304] selected driver: docker
	I1006 15:01:46.212612  701984 start.go:924] validating driver "docker" against &{Name:ha-481559 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-481559 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 15:01:46.212693  701984 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1006 15:01:46.212776  701984 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 15:01:46.269605  701984 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-06 15:01:46.258876471 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1006 15:01:46.270302  701984 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1006 15:01:46.270329  701984 cni.go:84] Creating CNI manager for ""
	I1006 15:01:46.270373  701984 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1006 15:01:46.270419  701984 start.go:348] cluster config:
	{Name:ha-481559 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-481559 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 15:01:46.272125  701984 out.go:179] * Starting "ha-481559" primary control-plane node in "ha-481559" cluster
	I1006 15:01:46.273048  701984 cache.go:123] Beginning downloading kic base image for docker with crio
	I1006 15:01:46.274095  701984 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1006 15:01:46.274969  701984 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1006 15:01:46.275001  701984 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21701-626179/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1006 15:01:46.275010  701984 cache.go:58] Caching tarball of preloaded images
	I1006 15:01:46.275079  701984 preload.go:233] Found /home/jenkins/minikube-integration/21701-626179/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1006 15:01:46.275089  701984 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1006 15:01:46.275081  701984 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1006 15:01:46.275176  701984 profile.go:143] Saving config to /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/config.json ...
	I1006 15:01:46.295225  701984 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1006 15:01:46.295246  701984 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1006 15:01:46.295266  701984 cache.go:232] Successfully downloaded all kic artifacts
	I1006 15:01:46.295293  701984 start.go:360] acquireMachinesLock for ha-481559: {Name:mk240cd185ab39e9e4d3fa7c476aea5736cb5b11 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1006 15:01:46.295349  701984 start.go:364] duration metric: took 37.555µs to acquireMachinesLock for "ha-481559"
	I1006 15:01:46.295367  701984 start.go:96] Skipping create...Using existing machine configuration
	I1006 15:01:46.295375  701984 fix.go:54] fixHost starting: 
	I1006 15:01:46.295587  701984 cli_runner.go:164] Run: docker container inspect ha-481559 --format={{.State.Status}}
	I1006 15:01:46.312275  701984 fix.go:112] recreateIfNeeded on ha-481559: state=Stopped err=<nil>
	W1006 15:01:46.312302  701984 fix.go:138] unexpected machine state, will restart: <nil>
	I1006 15:01:46.314002  701984 out.go:252] * Restarting existing docker container for "ha-481559" ...
	I1006 15:01:46.314062  701984 cli_runner.go:164] Run: docker start ha-481559
	I1006 15:01:46.546450  701984 cli_runner.go:164] Run: docker container inspect ha-481559 --format={{.State.Status}}
	I1006 15:01:46.564424  701984 kic.go:430] container "ha-481559" state is running.
	I1006 15:01:46.564772  701984 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-481559
	I1006 15:01:46.582786  701984 profile.go:143] Saving config to /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/config.json ...
	I1006 15:01:46.582997  701984 machine.go:93] provisionDockerMachine start ...
	I1006 15:01:46.583078  701984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 15:01:46.601452  701984 main.go:141] libmachine: Using SSH client type: native
	I1006 15:01:46.601724  701984 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32893 <nil> <nil>}
	I1006 15:01:46.601739  701984 main.go:141] libmachine: About to run SSH command:
	hostname
	I1006 15:01:46.602337  701984 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:35090->127.0.0.1:32893: read: connection reset by peer
	I1006 15:01:49.745932  701984 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-481559
	
	I1006 15:01:49.745960  701984 ubuntu.go:182] provisioning hostname "ha-481559"
	I1006 15:01:49.746042  701984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 15:01:49.763495  701984 main.go:141] libmachine: Using SSH client type: native
	I1006 15:01:49.763769  701984 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32893 <nil> <nil>}
	I1006 15:01:49.763784  701984 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-481559 && echo "ha-481559" | sudo tee /etc/hostname
	I1006 15:01:49.916644  701984 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-481559
	
	I1006 15:01:49.916725  701984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 15:01:49.934847  701984 main.go:141] libmachine: Using SSH client type: native
	I1006 15:01:49.935071  701984 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32893 <nil> <nil>}
	I1006 15:01:49.935089  701984 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-481559' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-481559/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-481559' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1006 15:01:50.079011  701984 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1006 15:01:50.079055  701984 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21701-626179/.minikube CaCertPath:/home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21701-626179/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21701-626179/.minikube}
	I1006 15:01:50.079077  701984 ubuntu.go:190] setting up certificates
	I1006 15:01:50.079088  701984 provision.go:84] configureAuth start
	I1006 15:01:50.079141  701984 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-481559
	I1006 15:01:50.096776  701984 provision.go:143] copyHostCerts
	I1006 15:01:50.096843  701984 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21701-626179/.minikube/key.pem
	I1006 15:01:50.096887  701984 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-626179/.minikube/key.pem, removing ...
	I1006 15:01:50.096924  701984 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-626179/.minikube/key.pem
	I1006 15:01:50.097001  701984 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21701-626179/.minikube/key.pem (1679 bytes)
	I1006 15:01:50.097123  701984 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21701-626179/.minikube/ca.pem
	I1006 15:01:50.097151  701984 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-626179/.minikube/ca.pem, removing ...
	I1006 15:01:50.097159  701984 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-626179/.minikube/ca.pem
	I1006 15:01:50.097230  701984 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21701-626179/.minikube/ca.pem (1082 bytes)
	I1006 15:01:50.097381  701984 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21701-626179/.minikube/cert.pem
	I1006 15:01:50.097413  701984 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-626179/.minikube/cert.pem, removing ...
	I1006 15:01:50.097420  701984 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-626179/.minikube/cert.pem
	I1006 15:01:50.097468  701984 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21701-626179/.minikube/cert.pem (1123 bytes)
	I1006 15:01:50.097549  701984 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21701-626179/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca-key.pem org=jenkins.ha-481559 san=[127.0.0.1 192.168.49.2 ha-481559 localhost minikube]
	I1006 15:01:50.447800  701984 provision.go:177] copyRemoteCerts
	I1006 15:01:50.447874  701984 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1006 15:01:50.447927  701984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 15:01:50.465959  701984 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32893 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa Username:docker}
	I1006 15:01:50.568789  701984 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1006 15:01:50.568870  701984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1006 15:01:50.586702  701984 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1006 15:01:50.586774  701984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1006 15:01:50.604720  701984 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1006 15:01:50.604808  701984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1006 15:01:50.622688  701984 provision.go:87] duration metric: took 543.582589ms to configureAuth
	I1006 15:01:50.622726  701984 ubuntu.go:206] setting minikube options for container-runtime
	I1006 15:01:50.622909  701984 config.go:182] Loaded profile config "ha-481559": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 15:01:50.623013  701984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 15:01:50.640864  701984 main.go:141] libmachine: Using SSH client type: native
	I1006 15:01:50.641165  701984 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32893 <nil> <nil>}
	I1006 15:01:50.641193  701984 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1006 15:01:50.900815  701984 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1006 15:01:50.900843  701984 machine.go:96] duration metric: took 4.317828783s to provisionDockerMachine
	I1006 15:01:50.900853  701984 start.go:293] postStartSetup for "ha-481559" (driver="docker")
	I1006 15:01:50.900863  701984 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1006 15:01:50.900923  701984 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1006 15:01:50.900961  701984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 15:01:50.918547  701984 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32893 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa Username:docker}
	I1006 15:01:51.021081  701984 ssh_runner.go:195] Run: cat /etc/os-release
	I1006 15:01:51.024764  701984 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1006 15:01:51.024788  701984 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1006 15:01:51.024798  701984 filesync.go:126] Scanning /home/jenkins/minikube-integration/21701-626179/.minikube/addons for local assets ...
	I1006 15:01:51.024843  701984 filesync.go:126] Scanning /home/jenkins/minikube-integration/21701-626179/.minikube/files for local assets ...
	I1006 15:01:51.024912  701984 filesync.go:149] local asset: /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem -> 6297192.pem in /etc/ssl/certs
	I1006 15:01:51.024927  701984 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem -> /etc/ssl/certs/6297192.pem
	I1006 15:01:51.025019  701984 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1006 15:01:51.032826  701984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem --> /etc/ssl/certs/6297192.pem (1708 bytes)
	I1006 15:01:51.050602  701984 start.go:296] duration metric: took 149.73063ms for postStartSetup
	I1006 15:01:51.050696  701984 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1006 15:01:51.050748  701984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 15:01:51.068484  701984 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32893 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa Username:docker}
	I1006 15:01:51.167707  701984 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1006 15:01:51.172531  701984 fix.go:56] duration metric: took 4.877147401s for fixHost
	I1006 15:01:51.172561  701984 start.go:83] releasing machines lock for "ha-481559", held for 4.877200795s
	I1006 15:01:51.172636  701984 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-481559
	I1006 15:01:51.190941  701984 ssh_runner.go:195] Run: cat /version.json
	I1006 15:01:51.191006  701984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 15:01:51.191054  701984 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1006 15:01:51.191134  701984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 15:01:51.209128  701984 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32893 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa Username:docker}
	I1006 15:01:51.209584  701984 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32893 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa Username:docker}
	I1006 15:01:51.362495  701984 ssh_runner.go:195] Run: systemctl --version
	I1006 15:01:51.369363  701984 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1006 15:01:51.404999  701984 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1006 15:01:51.409958  701984 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1006 15:01:51.410028  701984 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1006 15:01:51.418138  701984 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1006 15:01:51.418168  701984 start.go:495] detecting cgroup driver to use...
	I1006 15:01:51.418201  701984 detect.go:190] detected "systemd" cgroup driver on host os
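
The log does not show how detect.go reaches its answer; one common heuristic is to test for the unified cgroup v2 hierarchy, roughly as in this sketch (an assumption-level heuristic, not minikube's implementation):

    // cgroupdriver_sketch.go: guess a sensible cgroup manager for the
    // container runtime. Heuristic only, not minikube's detect.go logic.
    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        // cgroup v2 exposes cgroup.controllers at the root of the unified mount.
        if _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers"); err == nil {
            fmt.Println("cgroup v2 detected: prefer the systemd cgroup driver")
            return
        }
        fmt.Println("cgroup v1 detected: cgroupfs or systemd, depending on init")
    }
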
	I1006 15:01:51.418264  701984 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1006 15:01:51.432500  701984 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1006 15:01:51.444740  701984 docker.go:218] disabling cri-docker service (if available) ...
	I1006 15:01:51.444799  701984 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1006 15:01:51.459568  701984 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1006 15:01:51.472638  701984 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1006 15:01:51.548093  701984 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1006 15:01:51.629502  701984 docker.go:234] disabling docker service ...
	I1006 15:01:51.629574  701984 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1006 15:01:51.643687  701984 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1006 15:01:51.656528  701984 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1006 15:01:51.734011  701984 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1006 15:01:51.812779  701984 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1006 15:01:51.825167  701984 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1006 15:01:51.839186  701984 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1006 15:01:51.839274  701984 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 15:01:51.848529  701984 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1006 15:01:51.848608  701984 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 15:01:51.857415  701984 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 15:01:51.866115  701984 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 15:01:51.874826  701984 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1006 15:01:51.882836  701984 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 15:01:51.891797  701984 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 15:01:51.900171  701984 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 15:01:51.908782  701984 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1006 15:01:51.916072  701984 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1006 15:01:51.923289  701984 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 15:01:51.999114  701984 ssh_runner.go:195] Run: sudo systemctl restart crio
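
The sed commands above rewrite individual `key = value` lines in /etc/crio/crio.conf.d/02-crio.conf before crio is restarted. The same idempotent replace-or-append can be done without shelling out; a sketch, assuming each key sits on its own line:

    // crioconf_sketch.go: idempotently set `key = value` lines in a
    // TOML-style drop-in, mirroring the sed edits in the log above.
    package main

    import (
        "fmt"
        "log"
        "os"
        "regexp"
    )

    func setKey(path, key, value string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        // Match the whole `key = ...` line, wherever it appears.
        re := regexp.MustCompile(`(?m)^\s*` + regexp.QuoteMeta(key) + `\s*=.*$`)
        line := fmt.Sprintf("%s = %s", key, value)
        if re.Match(data) {
            data = re.ReplaceAll(data, []byte(line))
        } else {
            data = append(data, []byte("\n"+line+"\n")...)
        }
        return os.WriteFile(path, data, 0644)
    }

    func main() {
        conf := "/etc/crio/crio.conf.d/02-crio.conf"
        if err := setKey(conf, "pause_image", `"registry.k8s.io/pause:3.10.1"`); err != nil {
            log.Fatal(err)
        }
        if err := setKey(conf, "cgroup_manager", `"systemd"`); err != nil {
            log.Fatal(err)
        }
    }
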
	I1006 15:01:52.103785  701984 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1006 15:01:52.103847  701984 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1006 15:01:52.107845  701984 start.go:563] Will wait 60s for crictl version
	I1006 15:01:52.107895  701984 ssh_runner.go:195] Run: which crictl
	I1006 15:01:52.111706  701984 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1006 15:01:52.137020  701984 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1006 15:01:52.137126  701984 ssh_runner.go:195] Run: crio --version
	I1006 15:01:52.166358  701984 ssh_runner.go:195] Run: crio --version
	I1006 15:01:52.197148  701984 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1006 15:01:52.198353  701984 cli_runner.go:164] Run: docker network inspect ha-481559 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1006 15:01:52.216087  701984 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1006 15:01:52.220573  701984 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
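
The one-liner above is an idempotent /etc/hosts update: filter out any stale host.minikube.internal mapping, append the fresh one, and copy the result back. A Go sketch of the same filter-then-append step (simplified: no temp file, no sudo):

    // hosts_sketch.go: replace-or-append a hosts entry, the Go equivalent of
    // the `{ grep -v ...; echo ...; } > /tmp/h.$$; sudo cp` one-liner above.
    package main

    import (
        "log"
        "os"
        "strings"
    )

    func upsertHost(path, ip, name string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            // grep -v $'\t<name>$': drop any existing mapping for this host.
            if strings.HasSuffix(line, "\t"+name) {
                continue
            }
            kept = append(kept, line)
        }
        kept = append(kept, ip+"\t"+name) // echo "<ip>\t<name>"
        return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
        if err := upsertHost("/etc/hosts", "192.168.49.1", "host.minikube.internal"); err != nil {
            log.Fatal(err)
        }
    }
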
	I1006 15:01:52.231278  701984 kubeadm.go:883] updating cluster {Name:ha-481559 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-481559 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1006 15:01:52.231400  701984 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1006 15:01:52.231450  701984 ssh_runner.go:195] Run: sudo crictl images --output json
	I1006 15:01:52.264781  701984 crio.go:514] all images are preloaded for cri-o runtime.
	I1006 15:01:52.264801  701984 crio.go:433] Images already preloaded, skipping extraction
	I1006 15:01:52.264844  701984 ssh_runner.go:195] Run: sudo crictl images --output json
	I1006 15:01:52.291584  701984 crio.go:514] all images are preloaded for cri-o runtime.
	I1006 15:01:52.291607  701984 cache_images.go:85] Images are preloaded, skipping loading
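
The preload check above lists what CRI-O already holds via `crictl images --output json`. A sketch of consuming that output, assuming the usual crictl JSON shape (an `images` array whose entries carry `repoTags`):

    // crictlimages_sketch.go: list image tags known to the CRI runtime by
    // shelling out to crictl, as the preload check above does.
    package main

    import (
        "encoding/json"
        "fmt"
        "log"
        "os/exec"
    )

    // Field names follow crictl's JSON output; treat them as an assumption.
    type imageList struct {
        Images []struct {
            RepoTags []string `json:"repoTags"`
        } `json:"images"`
    }

    func main() {
        out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
        if err != nil {
            log.Fatal(err)
        }
        var list imageList
        if err := json.Unmarshal(out, &list); err != nil {
            log.Fatal(err)
        }
        for _, img := range list.Images {
            for _, tag := range img.RepoTags {
                fmt.Println(tag)
            }
        }
    }
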
	I1006 15:01:52.291614  701984 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1006 15:01:52.291708  701984 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-481559 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-481559 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1006 15:01:52.291770  701984 ssh_runner.go:195] Run: crio config
	I1006 15:01:52.338567  701984 cni.go:84] Creating CNI manager for ""
	I1006 15:01:52.338589  701984 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1006 15:01:52.338610  701984 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1006 15:01:52.338632  701984 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-481559 NodeName:ha-481559 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1006 15:01:52.338744  701984 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-481559"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1006 15:01:52.338801  701984 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1006 15:01:52.347483  701984 binaries.go:44] Found k8s binaries, skipping transfer
	I1006 15:01:52.347568  701984 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1006 15:01:52.355357  701984 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1006 15:01:52.367896  701984 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1006 15:01:52.380296  701984 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1006 15:01:52.392680  701984 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1006 15:01:52.396473  701984 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1006 15:01:52.406328  701984 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 15:01:52.485101  701984 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1006 15:01:52.514051  701984 certs.go:69] Setting up /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559 for IP: 192.168.49.2
	I1006 15:01:52.514073  701984 certs.go:195] generating shared ca certs ...
	I1006 15:01:52.514090  701984 certs.go:227] acquiring lock for ca certs: {Name:mka0cc25cb6a953e937aa825fc55167759271aaf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 15:01:52.514284  701984 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21701-626179/.minikube/ca.key
	I1006 15:01:52.514339  701984 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21701-626179/.minikube/proxy-client-ca.key
	I1006 15:01:52.514355  701984 certs.go:257] generating profile certs ...
	I1006 15:01:52.514462  701984 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/client.key
	I1006 15:01:52.514544  701984 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.key.ac196ca6
	I1006 15:01:52.514595  701984 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.key
	I1006 15:01:52.514610  701984 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1006 15:01:52.514629  701984 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1006 15:01:52.514646  701984 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1006 15:01:52.514666  701984 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1006 15:01:52.514682  701984 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1006 15:01:52.514731  701984 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1006 15:01:52.514762  701984 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1006 15:01:52.514780  701984 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1006 15:01:52.514855  701984 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/629719.pem (1338 bytes)
	W1006 15:01:52.514898  701984 certs.go:480] ignoring /home/jenkins/minikube-integration/21701-626179/.minikube/certs/629719_empty.pem, impossibly tiny 0 bytes
	I1006 15:01:52.514911  701984 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca-key.pem (1675 bytes)
	I1006 15:01:52.514943  701984 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem (1082 bytes)
	I1006 15:01:52.514975  701984 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/cert.pem (1123 bytes)
	I1006 15:01:52.515013  701984 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/key.pem (1679 bytes)
	I1006 15:01:52.515066  701984 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem (1708 bytes)
	I1006 15:01:52.515159  701984 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/629719.pem -> /usr/share/ca-certificates/629719.pem
	I1006 15:01:52.515184  701984 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem -> /usr/share/ca-certificates/6297192.pem
	I1006 15:01:52.515222  701984 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-626179/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1006 15:01:52.515850  701984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1006 15:01:52.536297  701984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1006 15:01:52.555790  701984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1006 15:01:52.575066  701984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1006 15:01:52.597425  701984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1006 15:01:52.616188  701984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1006 15:01:52.633992  701984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1006 15:01:52.651317  701984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1006 15:01:52.668942  701984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/certs/629719.pem --> /usr/share/ca-certificates/629719.pem (1338 bytes)
	I1006 15:01:52.685650  701984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem --> /usr/share/ca-certificates/6297192.pem (1708 bytes)
	I1006 15:01:52.702738  701984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1006 15:01:52.720514  701984 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1006 15:01:52.732781  701984 ssh_runner.go:195] Run: openssl version
	I1006 15:01:52.739000  701984 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/629719.pem && ln -fs /usr/share/ca-certificates/629719.pem /etc/ssl/certs/629719.pem"
	I1006 15:01:52.747351  701984 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/629719.pem
	I1006 15:01:52.751001  701984 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  6 14:13 /usr/share/ca-certificates/629719.pem
	I1006 15:01:52.751062  701984 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/629719.pem
	I1006 15:01:52.785464  701984 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/629719.pem /etc/ssl/certs/51391683.0"
	I1006 15:01:52.793884  701984 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6297192.pem && ln -fs /usr/share/ca-certificates/6297192.pem /etc/ssl/certs/6297192.pem"
	I1006 15:01:52.802527  701984 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6297192.pem
	I1006 15:01:52.806287  701984 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  6 14:13 /usr/share/ca-certificates/6297192.pem
	I1006 15:01:52.806346  701984 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6297192.pem
	I1006 15:01:52.839905  701984 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6297192.pem /etc/ssl/certs/3ec20f2e.0"
	I1006 15:01:52.847950  701984 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1006 15:01:52.856269  701984 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1006 15:01:52.859833  701984 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  6 13:56 /usr/share/ca-certificates/minikubeCA.pem
	I1006 15:01:52.859889  701984 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1006 15:01:52.893744  701984 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
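
The openssl/ln pairs above build OpenSSL's hashed CA directory: each trusted certificate becomes reachable as <subject-hash>.0 under /etc/ssl/certs (b5213941 is minikubeCA's hash in this run). A sketch reproducing one such link, shelling out to openssl for the hash:

    // cahash_sketch.go: install a cert under /etc/ssl/certs/<hash>.0 so
    // OpenSSL can find it by subject hash, as the openssl/ln commands above do.
    package main

    import (
        "fmt"
        "log"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        cert := "/usr/share/ca-certificates/minikubeCA.pem"
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
        if err != nil {
            log.Fatal(err)
        }
        hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
        link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
        _ = os.Remove(link) // ln -fs semantics: replace any stale link
        if err := os.Symlink(cert, link); err != nil {
            log.Fatal(err)
        }
    }
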
	I1006 15:01:52.902397  701984 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1006 15:01:52.906224  701984 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1006 15:01:52.940584  701984 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1006 15:01:52.975121  701984 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1006 15:01:53.010068  701984 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1006 15:01:53.056395  701984 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1006 15:01:53.098917  701984 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
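
`-checkend 86400` asks openssl whether a certificate expires within the next 86400 seconds (24 hours). The pure-Go equivalent is a NotAfter comparison, sketched here with one of the paths from the log:

    // checkend_sketch.go: report certs that expire within 24h, like
    // `openssl x509 -noout -checkend 86400` in the commands above.
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "log"
        "os"
        "time"
    )

    func main() {
        data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
        if err != nil {
            log.Fatal(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            log.Fatal("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            log.Fatal(err)
        }
        if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
            fmt.Println("certificate will expire within 24h")
            os.Exit(1) // same nonzero exit openssl uses to signal expiry
        }
        fmt.Println("certificate is good for at least 24h")
    }
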
	I1006 15:01:53.133146  701984 kubeadm.go:400] StartCluster: {Name:ha-481559 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-481559 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 15:01:53.133293  701984 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1006 15:01:53.133350  701984 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1006 15:01:53.161765  701984 cri.go:89] found id: ""
	I1006 15:01:53.161834  701984 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1006 15:01:53.169767  701984 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1006 15:01:53.169786  701984 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1006 15:01:53.169835  701984 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1006 15:01:53.177348  701984 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1006 15:01:53.177860  701984 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-481559" does not appear in /home/jenkins/minikube-integration/21701-626179/kubeconfig
	I1006 15:01:53.178037  701984 kubeconfig.go:62] /home/jenkins/minikube-integration/21701-626179/kubeconfig needs updating (will repair): [kubeconfig missing "ha-481559" cluster setting kubeconfig missing "ha-481559" context setting]
	I1006 15:01:53.178466  701984 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-626179/kubeconfig: {Name:mke84a74c9d22714f21826744ac414fa621492d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 15:01:53.179258  701984 kapi.go:59] client config for ha-481559: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/client.crt", KeyFile:"/home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/client.key", CAFile:"/home/jenkins/minikube-integration/21701-626179/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
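
kapi.go assembles that rest.Config by hand from the profile's client certificate and CA. With client-go the equivalent is loading the (just repaired) kubeconfig and building a typed clientset; a minimal sketch using the kubeconfig path from the log:

    // kubeclient_sketch.go: build a typed Kubernetes client from a
    // kubeconfig, the client-go counterpart of the rest.Config dump above.
    package main

    import (
        "context"
        "fmt"
        "log"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/21701-626179/kubeconfig")
        if err != nil {
            log.Fatal(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        nodes, err := client.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
        if err != nil {
            log.Fatal(err)
        }
        for _, n := range nodes.Items {
            fmt.Println(n.Name)
        }
    }
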
	I1006 15:01:53.179749  701984 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1006 15:01:53.179781  701984 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1006 15:01:53.179788  701984 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1006 15:01:53.179794  701984 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1006 15:01:53.179789  701984 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1006 15:01:53.179801  701984 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1006 15:01:53.180239  701984 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1006 15:01:53.188398  701984 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1006 15:01:53.188432  701984 kubeadm.go:601] duration metric: took 18.640424ms to restartPrimaryControlPlane
	I1006 15:01:53.188443  701984 kubeadm.go:402] duration metric: took 55.31048ms to StartCluster
	I1006 15:01:53.188464  701984 settings.go:142] acquiring lock: {Name:mk49b10f71f24d1f54d5c453b3b04e717e9a9100 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 15:01:53.188537  701984 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21701-626179/kubeconfig
	I1006 15:01:53.189024  701984 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-626179/kubeconfig: {Name:mke84a74c9d22714f21826744ac414fa621492d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 15:01:53.189291  701984 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1006 15:01:53.189351  701984 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1006 15:01:53.189450  701984 addons.go:69] Setting storage-provisioner=true in profile "ha-481559"
	I1006 15:01:53.189472  701984 addons.go:238] Setting addon storage-provisioner=true in "ha-481559"
	I1006 15:01:53.189480  701984 addons.go:69] Setting default-storageclass=true in profile "ha-481559"
	I1006 15:01:53.189497  701984 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-481559"
	I1006 15:01:53.189510  701984 host.go:66] Checking if "ha-481559" exists ...
	I1006 15:01:53.189548  701984 config.go:182] Loaded profile config "ha-481559": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 15:01:53.189835  701984 cli_runner.go:164] Run: docker container inspect ha-481559 --format={{.State.Status}}
	I1006 15:01:53.190004  701984 cli_runner.go:164] Run: docker container inspect ha-481559 --format={{.State.Status}}
	I1006 15:01:53.192670  701984 out.go:179] * Verifying Kubernetes components...
	I1006 15:01:53.193943  701984 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 15:01:53.209649  701984 kapi.go:59] client config for ha-481559: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/client.crt", KeyFile:"/home/jenkins/minikube-integration/21701-626179/.minikube/profiles/ha-481559/client.key", CAFile:"/home/jenkins/minikube-integration/21701-626179/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1006 15:01:53.210039  701984 addons.go:238] Setting addon default-storageclass=true in "ha-481559"
	I1006 15:01:53.210089  701984 host.go:66] Checking if "ha-481559" exists ...
	I1006 15:01:53.210542  701984 cli_runner.go:164] Run: docker container inspect ha-481559 --format={{.State.Status}}
	I1006 15:01:53.211200  701984 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1006 15:01:53.212531  701984 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 15:01:53.212549  701984 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1006 15:01:53.212600  701984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 15:01:53.238402  701984 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1006 15:01:53.238430  701984 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1006 15:01:53.238493  701984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481559
	I1006 15:01:53.240785  701984 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32893 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa Username:docker}
	I1006 15:01:53.257980  701984 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32893 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/ha-481559/id_rsa Username:docker}
	I1006 15:01:53.293467  701984 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1006 15:01:53.307364  701984 node_ready.go:35] waiting up to 6m0s for node "ha-481559" to be "Ready" ...
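
node_ready.go now polls the apiserver, for up to 6m, until the node reports Ready; the connection-refused warnings further down are the expected noise while the control plane restarts. A self-contained sketch of that poll loop:

    // nodeready_sketch.go: poll until a node's Ready condition is True,
    // tolerating transient API errors, as node_ready.go does above.
    package main

    import (
        "context"
        "fmt"
        "log"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func waitNodeReady(client kubernetes.Interface, name string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            node, err := client.CoreV1().Nodes().Get(context.Background(), name, metav1.GetOptions{})
            if err == nil {
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                        return nil
                    }
                }
            }
            // Errors (including connection refused) simply mean "try again".
            time.Sleep(2 * time.Second)
        }
        return fmt.Errorf("node %q not Ready within %s", name, timeout)
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/21701-626179/kubeconfig")
        if err != nil {
            log.Fatal(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        if err := waitNodeReady(client, "ha-481559", 6*time.Minute); err != nil {
            log.Fatal(err)
        }
        fmt.Println("node ha-481559 is Ready")
    }
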
	I1006 15:01:53.350572  701984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 15:01:53.365695  701984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	W1006 15:01:53.407298  701984 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 15:01:53.407342  701984 retry.go:31] will retry after 357.649421ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 15:01:53.420853  701984 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 15:01:53.420888  701984 retry.go:31] will retry after 373.269917ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
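
retry.go wraps each kubectl apply in growing, jittered delays (357ms and 373ms above, climbing to about 3s below) until the apiserver answers. A generic sketch of that backoff pattern; the delays and cap are illustrative:

    // retry_sketch.go: jittered, capped exponential backoff around a flaky
    // operation, the pattern behind the retry.go lines above and below.
    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    func retry(attempts int, base, maxDelay time.Duration, op func() error) error {
        delay := base
        var err error
        for i := 0; i < attempts; i++ {
            if err = op(); err == nil {
                return nil
            }
            // Sleep between 50% and 150% of the current delay (jitter).
            jittered := delay/2 + time.Duration(rand.Int63n(int64(delay)))
            fmt.Printf("will retry after %s: %v\n", jittered, err)
            time.Sleep(jittered)
            delay *= 2
            if delay > maxDelay {
                delay = maxDelay
            }
        }
        return err
    }

    func main() {
        attempt := 0
        err := retry(5, 300*time.Millisecond, 3*time.Second, func() error {
            attempt++
            if attempt < 4 {
                return fmt.Errorf("dial tcp [::1]:8443: connect: connection refused (attempt %d)", attempt)
            }
            return nil
        })
        fmt.Println("result:", err)
    }
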
	I1006 15:01:53.765311  701984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 15:01:53.794914  701984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1006 15:01:53.820162  701984 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 15:01:53.820198  701984 retry.go:31] will retry after 560.850722ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 15:01:53.849381  701984 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 15:01:53.849415  701984 retry.go:31] will retry after 534.611771ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 15:01:54.381588  701984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 15:01:54.385156  701984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1006 15:01:54.438225  701984 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 15:01:54.438264  701984 retry.go:31] will retry after 554.670785ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 15:01:54.439112  701984 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 15:01:54.439133  701984 retry.go:31] will retry after 308.986378ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 15:01:54.748751  701984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1006 15:01:54.803407  701984 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 15:01:54.803442  701984 retry.go:31] will retry after 474.547882ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 15:01:54.993194  701984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1006 15:01:55.046254  701984 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 15:01:55.046297  701984 retry.go:31] will retry after 677.664195ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 15:01:55.278726  701984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1006 15:01:55.308628  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	W1006 15:01:55.332936  701984 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 15:01:55.332970  701984 retry.go:31] will retry after 1.775881807s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	I1006 15:01:55.724438  701984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1006 15:01:55.776937  701984 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[log condensed: this apply/fail/retry cycle repeats from 15:01:55 to 15:03:16, alternating between storageclass.yaml and storage-provisioner.yaml; every attempt fails with the identical "connection refused" validation error, and retry.go schedules backoffs of 843ms, 1.45s, 2.46s, 3.13s, 1.48s, 3.76s, 3.18s, 9.42s, 8.20s, 13.73s, 4.97s, 18.45s, 18.31s, 14.43s, 30.61s, 32.38s and 23.78s. Interleaved node_ready warnings (15:01:57 through 15:03:13) show the apiserver at https://192.168.49.2:8443 refusing connections the whole time.]
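The cycle above is minikube's generic retry helper at work: each failed kubectl apply is rescheduled with a growing, jittered delay. A minimal Go sketch of that pattern follows; it is not minikube's actual retry.go (the real command runs via sudo over SSH inside the node), and the manifest path and attempt count here are illustrative only.

    package main

    import (
        "fmt"
        "math/rand"
        "os/exec"
        "time"
    )

    // retryApply re-runs `kubectl apply` with roughly exponential, jittered
    // backoff, mirroring the growing "will retry after ..." intervals above.
    func retryApply(manifest string, attempts int) error {
        backoff := time.Second
        var err error
        for i := 0; i < attempts; i++ {
            err = exec.Command("kubectl", "apply", "--force", "-f", manifest).Run()
            if err == nil {
                return nil
            }
            // Add up to 100% jitter so concurrent retries don't synchronize.
            delay := backoff + time.Duration(rand.Int63n(int64(backoff)))
            fmt.Printf("apply failed, will retry after %s: %v\n", delay, err)
            time.Sleep(delay)
            backoff *= 2
        }
        return fmt.Errorf("giving up after %d attempts: %w", attempts, err)
    }

    func main() {
        if err := retryApply("/etc/kubernetes/addons/storageclass.yaml", 5); err != nil {
            fmt.Println(err)
        }
    }

While the apiserver refuses every connection, no amount of backoff helps; the loop only buys time for the control plane to come back, which in this run it never does.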
	W1006 15:03:16.308055  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	[three more identical node_ready warnings at 15:03:18, 15:03:20 and 15:03:22]
	I1006 15:03:24.455485  701984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1006 15:03:24.509566  701984 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 15:03:24.509687  701984 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	[seven more identical node_ready warnings between 15:03:25 and 15:03:38]
	I1006 15:03:39.889153  701984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1006 15:03:39.944329  701984 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 15:03:39.944473  701984 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1006 15:03:39.946959  701984 out.go:179] * Enabled addons: 
	I1006 15:03:39.947914  701984 addons.go:514] duration metric: took 1m46.758571336s for enable addons: enabled=[]
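The validation error itself is worth unpacking: before applying a manifest, kubectl downloads the cluster's OpenAPI schema from /openapi/v2, so client-side validation fails whenever the apiserver is unreachable (--validate=false would skip the download, but the apply itself would still need the apiserver). Below is a minimal Go probe that reproduces the failing request; the endpoint and 32s timeout are taken from the log above, and TLS verification is skipped only because this is a throwaway diagnostic, not production code.

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 32 * time.Second, // matches kubectl's ?timeout=32s
            Transport: &http.Transport{
                // Diagnostic only: skip cert checks against the local apiserver.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get("https://localhost:8443/openapi/v2")
        if err != nil {
            // With the apiserver down this prints the same
            // "connect: connection refused" seen throughout the log.
            fmt.Println("openapi probe failed:", err)
            return
        }
        defer resp.Body.Close()
        fmt.Println("openapi status:", resp.Status)
    }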
	W1006 15:03:41.308834  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	[log condensed: the identical node_ready warning repeats every 2-2.5s from 15:03:41 to 15:07:52; the apiserver never becomes reachable]
	W1006 15:07:52.809004  701984 node_ready.go:55] error getting node "ha-481559" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-481559": dial tcp 192.168.49.2:8443: connect: connection refused
	I1006 15:07:53.308060  701984 node_ready.go:38] duration metric: took 6m0.000216007s for node "ha-481559" to be "Ready" ...
	I1006 15:07:53.311054  701984 out.go:203] 
	W1006 15:07:53.312196  701984 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1006 15:07:53.312219  701984 out.go:285] * 
	W1006 15:07:53.313838  701984 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1006 15:07:53.315023  701984 out.go:203] 
	
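Every poll in the six-minute wait above fails at the TCP layer, so the apiserver never listened at all. A minimal manual probe of the same endpoint (a sketch, assuming the docker network address 192.168.49.2 from the log is reachable from the host; -k skips certificate verification):

  # Hypothetical manual re-run of the request the wait loop keeps making:
  curl -k https://192.168.49.2:8443/api/v1/nodes/ha-481559
  # In this run it would fail the same way: connect: connection refused.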
	
	==> CRI-O <==
	Oct 06 15:07:46 ha-481559 crio[519]: time="2025-10-06T15:07:46.629237885Z" level=info msg="createCtr: removing container 7cccc243360d2822c57ef267495d4ba2f52ac7d1a172de4f7bf86c2782752b95" id=94453879-139c-429c-a5b1-5ee37a0899b6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 15:07:46 ha-481559 crio[519]: time="2025-10-06T15:07:46.629267049Z" level=info msg="createCtr: deleting container 7cccc243360d2822c57ef267495d4ba2f52ac7d1a172de4f7bf86c2782752b95 from storage" id=94453879-139c-429c-a5b1-5ee37a0899b6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 15:07:46 ha-481559 crio[519]: time="2025-10-06T15:07:46.631063814Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-ha-481559_kube-system_b4e1cca8a09d3789a7e0862428dfe0db_0" id=94453879-139c-429c-a5b1-5ee37a0899b6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 15:07:53 ha-481559 crio[519]: time="2025-10-06T15:07:53.6056339Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=9d4ff92e-10b7-4cbd-a66f-12aec986be76 name=/runtime.v1.ImageService/ImageStatus
	Oct 06 15:07:53 ha-481559 crio[519]: time="2025-10-06T15:07:53.606711328Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=5d809907-2803-4774-bb2b-994147e1fe9e name=/runtime.v1.ImageService/ImageStatus
	Oct 06 15:07:53 ha-481559 crio[519]: time="2025-10-06T15:07:53.607814333Z" level=info msg="Creating container: kube-system/etcd-ha-481559/etcd" id=a0a68d32-290d-475f-98e2-039b9e340155 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 15:07:53 ha-481559 crio[519]: time="2025-10-06T15:07:53.608120994Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 15:07:53 ha-481559 crio[519]: time="2025-10-06T15:07:53.6122724Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 15:07:53 ha-481559 crio[519]: time="2025-10-06T15:07:53.612720273Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 15:07:53 ha-481559 crio[519]: time="2025-10-06T15:07:53.635848688Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=a0a68d32-290d-475f-98e2-039b9e340155 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 15:07:53 ha-481559 crio[519]: time="2025-10-06T15:07:53.637556379Z" level=info msg="createCtr: deleting container ID 5010fd13ff74bfb6cd5c840a91b2b7c210c7ea5032b47c702543d7ccf65b7d27 from idIndex" id=a0a68d32-290d-475f-98e2-039b9e340155 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 15:07:53 ha-481559 crio[519]: time="2025-10-06T15:07:53.637596029Z" level=info msg="createCtr: removing container 5010fd13ff74bfb6cd5c840a91b2b7c210c7ea5032b47c702543d7ccf65b7d27" id=a0a68d32-290d-475f-98e2-039b9e340155 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 15:07:53 ha-481559 crio[519]: time="2025-10-06T15:07:53.637629866Z" level=info msg="createCtr: deleting container 5010fd13ff74bfb6cd5c840a91b2b7c210c7ea5032b47c702543d7ccf65b7d27 from storage" id=a0a68d32-290d-475f-98e2-039b9e340155 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 15:07:53 ha-481559 crio[519]: time="2025-10-06T15:07:53.643172574Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-ha-481559_kube-system_520c6060936b1c2aac479c99ed6c0355_0" id=a0a68d32-290d-475f-98e2-039b9e340155 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 15:07:56 ha-481559 crio[519]: time="2025-10-06T15:07:56.604978083Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=4b7667da-134d-4f28-bd8d-31229cd456f4 name=/runtime.v1.ImageService/ImageStatus
	Oct 06 15:07:56 ha-481559 crio[519]: time="2025-10-06T15:07:56.605937787Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=956799c7-1c22-4dc4-900e-38eba06bdefd name=/runtime.v1.ImageService/ImageStatus
	Oct 06 15:07:56 ha-481559 crio[519]: time="2025-10-06T15:07:56.607319223Z" level=info msg="Creating container: kube-system/kube-controller-manager-ha-481559/kube-controller-manager" id=ab1fdfe1-8a97-4d51-ba07-1d0a82a889be name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 15:07:56 ha-481559 crio[519]: time="2025-10-06T15:07:56.607686889Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 15:07:56 ha-481559 crio[519]: time="2025-10-06T15:07:56.611466761Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 15:07:56 ha-481559 crio[519]: time="2025-10-06T15:07:56.612023975Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 15:07:56 ha-481559 crio[519]: time="2025-10-06T15:07:56.629132287Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=ab1fdfe1-8a97-4d51-ba07-1d0a82a889be name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 15:07:56 ha-481559 crio[519]: time="2025-10-06T15:07:56.630490666Z" level=info msg="createCtr: deleting container ID 36de4b88c8668bb73c2de85cd7883c07eb58758c86ac1d4b845c347ca19e50c5 from idIndex" id=ab1fdfe1-8a97-4d51-ba07-1d0a82a889be name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 15:07:56 ha-481559 crio[519]: time="2025-10-06T15:07:56.630529753Z" level=info msg="createCtr: removing container 36de4b88c8668bb73c2de85cd7883c07eb58758c86ac1d4b845c347ca19e50c5" id=ab1fdfe1-8a97-4d51-ba07-1d0a82a889be name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 15:07:56 ha-481559 crio[519]: time="2025-10-06T15:07:56.630560403Z" level=info msg="createCtr: deleting container 36de4b88c8668bb73c2de85cd7883c07eb58758c86ac1d4b845c347ca19e50c5 from storage" id=ab1fdfe1-8a97-4d51-ba07-1d0a82a889be name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 15:07:56 ha-481559 crio[519]: time="2025-10-06T15:07:56.632720397Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-ha-481559_kube-system_5f3181798721fe8691d871f051785efc_0" id=ab1fdfe1-8a97-4d51-ba07-1d0a82a889be name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 15:07:58.817050    2535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 15:07:58.817527    2535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 15:07:58.819090    2535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 15:07:58.819552    2535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 15:07:58.821065    2535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	
	
	==> kernel <==
	 15:07:58 up  5:50,  0 user,  load average: 0.08, 0.04, 0.09
	Linux ha-481559 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 06 15:07:48 ha-481559 kubelet[675]: E1006 15:07:48.248269     675 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-481559?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 06 15:07:48 ha-481559 kubelet[675]: I1006 15:07:48.419061     675 kubelet_node_status.go:75] "Attempting to register node" node="ha-481559"
	Oct 06 15:07:48 ha-481559 kubelet[675]: E1006 15:07:48.419541     675 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-481559"
	Oct 06 15:07:51 ha-481559 kubelet[675]: E1006 15:07:51.129592     675 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8443: connect: connection refused" event="&Event{ObjectMeta:{ha-481559.186bef0b9dfc36de  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-481559,UID:ha-481559,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ha-481559 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ha-481559,},FirstTimestamp:2025-10-06 15:01:52.592541406 +0000 UTC m=+0.076229606,LastTimestamp:2025-10-06 15:01:52.592541406 +0000 UTC m=+0.076229606,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-481559,}"
	Oct 06 15:07:52 ha-481559 kubelet[675]: E1006 15:07:52.618976     675 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-481559\" not found"
	Oct 06 15:07:53 ha-481559 kubelet[675]: E1006 15:07:53.605090     675 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-481559\" not found" node="ha-481559"
	Oct 06 15:07:53 ha-481559 kubelet[675]: E1006 15:07:53.643581     675 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 06 15:07:53 ha-481559 kubelet[675]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 06 15:07:53 ha-481559 kubelet[675]:  > podSandboxID="2509df0fbb37ea26e7c4176db5318bb5b7bb232dde96912d6badc3737828a2f0"
	Oct 06 15:07:53 ha-481559 kubelet[675]: E1006 15:07:53.643723     675 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 06 15:07:53 ha-481559 kubelet[675]:         container etcd start failed in pod etcd-ha-481559_kube-system(520c6060936b1c2aac479c99ed6c0355): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 06 15:07:53 ha-481559 kubelet[675]:  > logger="UnhandledError"
	Oct 06 15:07:53 ha-481559 kubelet[675]: E1006 15:07:53.643767     675 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-ha-481559" podUID="520c6060936b1c2aac479c99ed6c0355"
	Oct 06 15:07:55 ha-481559 kubelet[675]: E1006 15:07:55.248957     675 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-481559?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 06 15:07:55 ha-481559 kubelet[675]: I1006 15:07:55.421634     675 kubelet_node_status.go:75] "Attempting to register node" node="ha-481559"
	Oct 06 15:07:55 ha-481559 kubelet[675]: E1006 15:07:55.422098     675 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-481559"
	Oct 06 15:07:56 ha-481559 kubelet[675]: E1006 15:07:56.604521     675 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-481559\" not found" node="ha-481559"
	Oct 06 15:07:56 ha-481559 kubelet[675]: E1006 15:07:56.632984     675 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 06 15:07:56 ha-481559 kubelet[675]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 06 15:07:56 ha-481559 kubelet[675]:  > podSandboxID="71ebb15c15a076b5e8bbeb63b220e7169b705d9032094ef5e3823a2eacc0feef"
	Oct 06 15:07:56 ha-481559 kubelet[675]: E1006 15:07:56.633080     675 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 06 15:07:56 ha-481559 kubelet[675]:         container kube-controller-manager start failed in pod kube-controller-manager-ha-481559_kube-system(5f3181798721fe8691d871f051785efc): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 06 15:07:56 ha-481559 kubelet[675]:  > logger="UnhandledError"
	Oct 06 15:07:56 ha-481559 kubelet[675]: E1006 15:07:56.633109     675 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-ha-481559" podUID="5f3181798721fe8691d871f051785efc"
	Oct 06 15:07:57 ha-481559 kubelet[675]: E1006 15:07:57.722546     675 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	

-- /stdout --
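Both the CRI-O and kubelet excerpts above bottom out in the same error: container create failed: cannot open sd-bus: No such file or directory. That message usually means the OCI runtime is asked to use the systemd cgroup manager while no systemd/D-Bus endpoint is reachable inside the node container; whether that is the misconfiguration here is an assumption, but it is cheap to check (a sketch; /etc/crio is CRI-O's standard config location, and the crictl line is the one kubeadm's own advice prints):

  # Which cgroup manager is CRI-O configured with (systemd vs cgroupfs)?
  minikube ssh -p ha-481559 "sudo grep -rn cgroup_manager /etc/crio/"
  # List the containers that failed to create, per kubeadm's printed advice:
  minikube ssh -p ha-481559 "sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause"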
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-481559 -n ha-481559
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-481559 -n ha-481559: exit status 2 (291.88231ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "ha-481559" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.53s)

TestJSONOutput/start/Command (499.93s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-616465 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
E1006 15:10:30.521799  629719 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 15:15:30.512185  629719 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-616465 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: exit status 80 (8m19.930226619s)

-- stdout --
	{"specversion":"1.0","id":"025c08a9-24c2-4fab-baa6-2bb0a46c10a4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-616465] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"197eed31-340b-4cf7-af42-381e12f24855","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21701"}}
	{"specversion":"1.0","id":"c4b811ba-e36a-4e5f-91d9-09b1117a2c6b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"9abef3f5-4ed7-4016-9aa8-e6c8dca32f45","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21701-626179/kubeconfig"}}
	{"specversion":"1.0","id":"885dcc23-90bc-4ad4-b796-e0a2f10e4010","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21701-626179/.minikube"}}
	{"specversion":"1.0","id":"bcb32a20-f60f-43e4-84b3-55748dff7a72","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"968c3cfe-a768-44b1-99a8-7561f60210c6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"09830c88-bf19-4c82-bf29-5d96d2e375b4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"a0037872-bde4-4e64-9cd5-b1dc7138781d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"7c835654-86cb-4c81-a4f3-d3d3fa2afbb8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"json-output-616465\" primary control-plane node in \"json-output-616465\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"54f04c38-049c-4838-bb68-5ba91e47eb4a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1759382731-21643 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"6ee10676-8088-4261-8453-fc0602506da7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"82e3a5d3-12ac-4dac-aa68-c89ae6a9233a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"11","message":"Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...","name":"Preparing Kubernetes","totalsteps":"19"}}
	{"specversion":"1.0","id":"ff40e346-3cf5-427f-8e48-99d4bbbc186c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"12","message":"Generating certificates and keys ...","name":"Generating certificates","totalsteps":"19"}}
	{"specversion":"1.0","id":"a9ea346d-aeca-4711-b90e-6e562c08eaa8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"13","message":"Booting up control plane ...","name":"Booting control plane","totalsteps":"19"}}
	{"specversion":"1.0","id":"e03c9222-afc4-42d7-be3b-68e2190e16f2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"initialization failed, will try again: wait: sudo /bin/bash -c \"env PATH=\"/var/lib/minikube/binaries/v1.34.1:$PATH\" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables\": Process exited with status 1\nstdout:\n[init] Using Kubernetes version: v1.34.1\n[preflight] Running pre-flight checks\n[preflight] The system verification failed. Pri
nting the output from the verification:\n\u001b[0;37mKERNEL_VERSION\u001b[0m: \u001b[0;32m6.8.0-1041-gcp\u001b[0m\n\u001b[0;37mOS\u001b[0m: \u001b[0;32mLinux\u001b[0m\n\u001b[0;37mCGROUPS_CPU\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_CPUSET\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_DEVICES\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_FREEZER\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_MEMORY\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_PIDS\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_HUGETLB\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_IO\u001b[0m: \u001b[0;32menabled\u001b[0m\n[preflight] Pulling images required for setting up a Kubernetes cluster\n[preflight] This might take a minute or two, depending on the speed of your internet connection\n[preflight] You can also perform this action beforehand using 'kubeadm config images pull'\n[certs] Using certificateDir folder \"/var/lib/minikube/certs\"\
n[certs] Using existing ca certificate authority\n[certs] Using existing apiserver certificate and key on disk\n[certs] Generating \"apiserver-kubelet-client\" certificate and key\n[certs] Generating \"front-proxy-ca\" certificate and key\n[certs] Generating \"front-proxy-client\" certificate and key\n[certs] Generating \"etcd/ca\" certificate and key\n[certs] Generating \"etcd/server\" certificate and key\n[certs] etcd/server serving cert is signed for DNS names [json-output-616465 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]\n[certs] Generating \"etcd/peer\" certificate and key\n[certs] etcd/peer serving cert is signed for DNS names [json-output-616465 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]\n[certs] Generating \"etcd/healthcheck-client\" certificate and key\n[certs] Generating \"apiserver-etcd-client\" certificate and key\n[certs] Generating \"sa\" key and public key\n[kubeconfig] Using kubeconfig folder \"/etc/kubernetes\"\n[kubeconfig] Writing \"admin.conf\" kubeconfig file\n[kubeconfig] Writi
ng \"super-admin.conf\" kubeconfig file\n[kubeconfig] Writing \"kubelet.conf\" kubeconfig file\n[kubeconfig] Writing \"controller-manager.conf\" kubeconfig file\n[kubeconfig] Writing \"scheduler.conf\" kubeconfig file\n[etcd] Creating static Pod manifest for local etcd in \"/etc/kubernetes/manifests\"\n[control-plane] Using manifest folder \"/etc/kubernetes/manifests\"\n[control-plane] Creating static Pod manifest for \"kube-apiserver\"\n[control-plane] Creating static Pod manifest for \"kube-controller-manager\"\n[control-plane] Creating static Pod manifest for \"kube-scheduler\"\n[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/instance-config.yaml\"\n[patches] Applied patch of type \"application/strategic-merge-patch+json\" to target \"kubeletconfiguration\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"\n[kubelet-start] Starting the ku
belet\n[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory \"/etc/kubernetes/manifests\"\n[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s\n[kubelet-check] The kubelet is healthy after 1.001057894s\n[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s\n[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez\n[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz\n[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez\n[control-plane-check] kube-scheduler is not healthy after 4m0.000227233s\n[control-plane-check] kube-apiserver is not healthy after 4m0.000272344s\n[control-plane-check] kube-controller-manager is not healthy after 4m0.000497234s\n\nA control plane component may have crashed or exited when started by the container runtime.\nTo troubleshoot, list all containers using
your preferred container runtimes CLI.\nHere is one example how you may list all running Kubernetes containers by using crictl:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'\n\tOnce you have found the failing container, you can inspect its logs with:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'\n\n\nstderr:\n\t[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: \"configs\", output: \"modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\\n\", err: exit status 1\n\t[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'\nerror: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get \"https://127.0.0.1:10259/livez\": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at htt
ps://192.168.49.2:8443/livez: Get \"https://control-plane.minikube.internal:8443/livez?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get \"https://127.0.0.1:10257/healthz\": dial tcp 127.0.0.1:10257: connect: connection refused]\nTo see the stack trace of this error execute with --v=5 or higher"}}
	{"specversion":"1.0","id":"b6d405b9-e141-43fc-9a6e-59ee44bb62b7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"12","message":"Generating certificates and keys ...","name":"Generating certificates","totalsteps":"19"}}
	{"specversion":"1.0","id":"9aa5dbfd-bb44-4d7b-a2b3-5d7467cd5b4e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"13","message":"Booting up control plane ...","name":"Booting control plane","totalsteps":"19"}}
	{"specversion":"1.0","id":"d36f2f53-e253-4848-804a-0a78b1e8bfcf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"Error starting cluster: wait: sudo /bin/bash -c \"env PATH=\"/var/lib/minikube/binaries/v1.34.1:$PATH\" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables\": Process exited with status 1\nstdout:\n[init] Using Kubernetes version: v1.34.1\n[preflight] Running pre-flight checks\n[preflight] The system verification failed. Printing the outpu
t from the verification:\n\u001b[0;37mKERNEL_VERSION\u001b[0m: \u001b[0;32m6.8.0-1041-gcp\u001b[0m\n\u001b[0;37mOS\u001b[0m: \u001b[0;32mLinux\u001b[0m\n\u001b[0;37mCGROUPS_CPU\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_CPUSET\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_DEVICES\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_FREEZER\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_MEMORY\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_PIDS\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_HUGETLB\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_IO\u001b[0m: \u001b[0;32menabled\u001b[0m\n[preflight] Pulling images required for setting up a Kubernetes cluster\n[preflight] This might take a minute or two, depending on the speed of your internet connection\n[preflight] You can also perform this action beforehand using 'kubeadm config images pull'\n[certs] Using certificateDir folder \"/var/lib/minikube/certs\"\n[certs] Using
existing ca certificate authority\n[certs] Using existing apiserver certificate and key on disk\n[certs] Using existing apiserver-kubelet-client certificate and key on disk\n[certs] Using existing front-proxy-ca certificate authority\n[certs] Using existing front-proxy-client certificate and key on disk\n[certs] Using existing etcd/ca certificate authority\n[certs] Using existing etcd/server certificate and key on disk\n[certs] Using existing etcd/peer certificate and key on disk\n[certs] Using existing etcd/healthcheck-client certificate and key on disk\n[certs] Using existing apiserver-etcd-client certificate and key on disk\n[certs] Using the existing \"sa\" key\n[kubeconfig] Using kubeconfig folder \"/etc/kubernetes\"\n[kubeconfig] Writing \"admin.conf\" kubeconfig file\n[kubeconfig] Writing \"super-admin.conf\" kubeconfig file\n[kubeconfig] Writing \"kubelet.conf\" kubeconfig file\n[kubeconfig] Writing \"controller-manager.conf\" kubeconfig file\n[kubeconfig] Writing \"scheduler.conf\" kubeconfig file\n[
etcd] Creating static Pod manifest for local etcd in \"/etc/kubernetes/manifests\"\n[control-plane] Using manifest folder \"/etc/kubernetes/manifests\"\n[control-plane] Creating static Pod manifest for \"kube-apiserver\"\n[control-plane] Creating static Pod manifest for \"kube-controller-manager\"\n[control-plane] Creating static Pod manifest for \"kube-scheduler\"\n[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/instance-config.yaml\"\n[patches] Applied patch of type \"application/strategic-merge-patch+json\" to target \"kubeletconfiguration\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"\n[kubelet-start] Starting the kubelet\n[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory \"/etc/kubernetes/manifests\"\n[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/health
z. This can take up to 4m0s\n[kubelet-check] The kubelet is healthy after 1.001112227s\n[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s\n[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez\n[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz\n[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez\n[control-plane-check] kube-apiserver is not healthy after 4m0.000633926s\n[control-plane-check] kube-scheduler is not healthy after 4m0.000668621s\n[control-plane-check] kube-controller-manager is not healthy after 4m0.000644963s\n\nA control plane component may have crashed or exited when started by the container runtime.\nTo troubleshoot, list all containers using your preferred container runtimes CLI.\nHere is one example how you may list all running Kubernetes containers by using crictl:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v p
ause'\n\tOnce you have found the failing container, you can inspect its logs with:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'\n\n\nstderr:\n\t[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: \"configs\", output: \"modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\\n\", err: exit status 1\n\t[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'\nerror: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get \"https://127.0.0.1:10259/livez\": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get \"https://127.0.0.1:10
257/healthz\": dial tcp 127.0.0.1:10257: connect: connection refused]\nTo see the stack trace of this error execute with --v=5 or higher"}}
	{"specversion":"1.0","id":"011770d5-772e-4d22-b0d3-f6e51882db50","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"failed to start node: wait: sudo /bin/bash -c \"env PATH=\"/var/lib/minikube/binaries/v1.34.1:$PATH\" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables\": Process exited with status 1\nstdout:\n[init] Using Kubernetes version: v1.34.1\n[preflight] Running pre-flight checks\n[preflight] The system v
erification failed. Printing the output from the verification:\n\u001b[0;37mKERNEL_VERSION\u001b[0m: \u001b[0;32m6.8.0-1041-gcp\u001b[0m\n\u001b[0;37mOS\u001b[0m: \u001b[0;32mLinux\u001b[0m\n\u001b[0;37mCGROUPS_CPU\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_CPUSET\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_DEVICES\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_FREEZER\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_MEMORY\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_PIDS\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_HUGETLB\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_IO\u001b[0m: \u001b[0;32menabled\u001b[0m\n[preflight] Pulling images required for setting up a Kubernetes cluster\n[preflight] This might take a minute or two, depending on the speed of your internet connection\n[preflight] You can also perform this action beforehand using 'kubeadm config images pull'\n[certs] Using certificateDir folder \"/va
r/lib/minikube/certs\"\n[certs] Using existing ca certificate authority\n[certs] Using existing apiserver certificate and key on disk\n[certs] Using existing apiserver-kubelet-client certificate and key on disk\n[certs] Using existing front-proxy-ca certificate authority\n[certs] Using existing front-proxy-client certificate and key on disk\n[certs] Using existing etcd/ca certificate authority\n[certs] Using existing etcd/server certificate and key on disk\n[certs] Using existing etcd/peer certificate and key on disk\n[certs] Using existing etcd/healthcheck-client certificate and key on disk\n[certs] Using existing apiserver-etcd-client certificate and key on disk\n[certs] Using the existing \"sa\" key\n[kubeconfig] Using kubeconfig folder \"/etc/kubernetes\"\n[kubeconfig] Writing \"admin.conf\" kubeconfig file\n[kubeconfig] Writing \"super-admin.conf\" kubeconfig file\n[kubeconfig] Writing \"kubelet.conf\" kubeconfig file\n[kubeconfig] Writing \"controller-manager.conf\" kubeconfig file\n[kubeconfig] Writing
\"scheduler.conf\" kubeconfig file\n[etcd] Creating static Pod manifest for local etcd in \"/etc/kubernetes/manifests\"\n[control-plane] Using manifest folder \"/etc/kubernetes/manifests\"\n[control-plane] Creating static Pod manifest for \"kube-apiserver\"\n[control-plane] Creating static Pod manifest for \"kube-controller-manager\"\n[control-plane] Creating static Pod manifest for \"kube-scheduler\"\n[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/instance-config.yaml\"\n[patches] Applied patch of type \"application/strategic-merge-patch+json\" to target \"kubeletconfiguration\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"\n[kubelet-start] Starting the kubelet\n[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory \"/etc/kubernetes/manifests\"\n[kubelet-check] Waiting for a healthy ku
belet at http://127.0.0.1:10248/healthz. This can take up to 4m0s\n[kubelet-check] The kubelet is healthy after 1.001112227s\n[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s\n[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez\n[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz\n[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez\n[control-plane-check] kube-apiserver is not healthy after 4m0.000633926s\n[control-plane-check] kube-scheduler is not healthy after 4m0.000668621s\n[control-plane-check] kube-controller-manager is not healthy after 4m0.000644963s\n\nA control plane component may have crashed or exited when started by the container runtime.\nTo troubleshoot, list all containers using your preferred container runtimes CLI.\nHere is one example how you may list all running Kubernetes containers by using crictl:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/c
rio.sock ps -a | grep kube | grep -v pause'\n\tOnce you have found the failing container, you can inspect its logs with:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'\n\n\nstderr:\n\t[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: \"configs\", output: \"modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\\n\", err: exit status 1\n\t[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'\nerror: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get \"https://127.0.0.1:10259/livez\": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:102
57/healthz: Get \"https://127.0.0.1:10257/healthz\": dial tcp 127.0.0.1:10257: connect: connection refused]\nTo see the stack trace of this error execute with --v=5 or higher","name":"GUEST_START","url":""}}
	{"specversion":"1.0","id":"ac08bb93-9261-4775-8fad-a6914c94a963","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 start -p json-output-616465 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio": exit status 80
--- FAIL: TestJSONOutput/start/Command (499.93s)
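Both kubeadm attempts in this run time out on the same three control-plane health checks after 4m0s each, which accounts for most of the 8m19s wall time. To capture the full node state for a report, the boxed advice above translates to (a sketch; profile name taken from this run):

  out/minikube-linux-amd64 logs --file=logs.txt -p json-output-616465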

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
json_output_test.go:114: step 12 has already been assigned to another step:
Generating certificates and keys ...
Cannot use for:
Generating certificates and keys ...
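The assertion failing here is distinctness of currentstep across step events: after the first kubeadm attempt fails, the retry re-emits step 12 ("Generating certificates") and step 13 ("Booting up control plane"), so step 12 maps to two events. One way to surface such duplicates from a captured stream (a sketch, assuming jq is installed and events.json is a hypothetical file holding one CloudEvent per line, as --output=json emits):

  grep '"io.k8s.sigs.minikube.step"' events.json \
    | jq -r '.data.currentstep' \
    | sort | uniq -d
  # Would print 12 and 13 for this run.

The full event stream the test inspected follows.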
[Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 025c08a9-24c2-4fab-baa6-2bb0a46c10a4
datacontenttype: application/json
Data,
{
"currentstep": "0",
"message": "[json-output-616465] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)",
"name": "Initial Minikube Setup",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 197eed31-340b-4cf7-af42-381e12f24855
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_LOCATION=21701"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: c4b811ba-e36a-4e5f-91d9-09b1117a2c6b
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 9abef3f5-4ed7-4016-9aa8-e6c8dca32f45
datacontenttype: application/json
Data,
{
"message": "KUBECONFIG=/home/jenkins/minikube-integration/21701-626179/kubeconfig"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 885dcc23-90bc-4ad4-b796-e0a2f10e4010
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_HOME=/home/jenkins/minikube-integration/21701-626179/.minikube"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: bcb32a20-f60f-43e4-84b3-55748dff7a72
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_BIN=out/minikube-linux-amd64"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 968c3cfe-a768-44b1-99a8-7561f60210c6
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_FORCE_SYSTEMD="
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 09830c88-bf19-4c82-bf29-5d96d2e375b4
datacontenttype: application/json
Data,
{
"currentstep": "1",
"message": "Using the docker driver based on user configuration",
"name": "Selecting Driver",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: a0037872-bde4-4e64-9cd5-b1dc7138781d
datacontenttype: application/json
Data,
{
"message": "Using Docker driver with root privileges"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 7c835654-86cb-4c81-a4f3-d3d3fa2afbb8
datacontenttype: application/json
Data,
{
"currentstep": "3",
"message": "Starting \"json-output-616465\" primary control-plane node in \"json-output-616465\" cluster",
"name": "Starting Node",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 54f04c38-049c-4838-bb68-5ba91e47eb4a
datacontenttype: application/json
Data,
{
"currentstep": "5",
"message": "Pulling base image v0.0.48-1759382731-21643 ...",
"name": "Pulling Base Image",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 6ee10676-8088-4261-8453-fc0602506da7
datacontenttype: application/json
Data,
{
"currentstep": "8",
"message": "Creating docker container (CPUs=2, Memory=3072MB) ...",
"name": "Creating Container",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 82e3a5d3-12ac-4dac-aa68-c89ae6a9233a
datacontenttype: application/json
Data,
{
"currentstep": "11",
"message": "Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...",
"name": "Preparing Kubernetes",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: ff40e346-3cf5-427f-8e48-99d4bbbc186c
datacontenttype: application/json
Data,
{
"currentstep": "12",
"message": "Generating certificates and keys ...",
"name": "Generating certificates",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: a9ea346d-aeca-4711-b90e-6e562c08eaa8
datacontenttype: application/json
Data,
{
"currentstep": "13",
"message": "Booting up control plane ...",
"name": "Booting control plane",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: e03c9222-afc4-42d7-be3b-68e2190e16f2
datacontenttype: application/json
Data,
{
"message": "initialization failed, will try again: wait: sudo /bin/bash -c \"env PATH=\"/var/lib/minikube/binaries/v1.34.1:$PATH\" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables\": Process exited with status 1\nstdout:\n[init] Using Kubernetes version: v1.34.1\n[preflight] Running pre-flight checks\n[preflight] The system verification failed. Printing the output from the verification:\n\u001b[0;37mKERNEL_VERSION\u001b[0m: \u001b[0;32m6.8.0-1041-gcp\u001b[0m\n\u001b[0;37mOS\u001b[0m: \u001b[0;32mLinux\u001b[0m\n\u001b[0;37mCGR
OUPS_CPU\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_CPUSET\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_DEVICES\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_FREEZER\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_MEMORY\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_PIDS\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_HUGETLB\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_IO\u001b[0m: \u001b[0;32menabled\u001b[0m\n[preflight] Pulling images required for setting up a Kubernetes cluster\n[preflight] This might take a minute or two, depending on the speed of your internet connection\n[preflight] You can also perform this action beforehand using 'kubeadm config images pull'\n[certs] Using certificateDir folder \"/var/lib/minikube/certs\"\n[certs] Using existing ca certificate authority\n[certs] Using existing apiserver certificate and key on disk\n[certs] Generating \"apiserver-kubelet-client\" certificate and key\n[c
erts] Generating \"front-proxy-ca\" certificate and key\n[certs] Generating \"front-proxy-client\" certificate and key\n[certs] Generating \"etcd/ca\" certificate and key\n[certs] Generating \"etcd/server\" certificate and key\n[certs] etcd/server serving cert is signed for DNS names [json-output-616465 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]\n[certs] Generating \"etcd/peer\" certificate and key\n[certs] etcd/peer serving cert is signed for DNS names [json-output-616465 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]\n[certs] Generating \"etcd/healthcheck-client\" certificate and key\n[certs] Generating \"apiserver-etcd-client\" certificate and key\n[certs] Generating \"sa\" key and public key\n[kubeconfig] Using kubeconfig folder \"/etc/kubernetes\"\n[kubeconfig] Writing \"admin.conf\" kubeconfig file\n[kubeconfig] Writing \"super-admin.conf\" kubeconfig file\n[kubeconfig] Writing \"kubelet.conf\" kubeconfig file\n[kubeconfig] Writing \"controller-manager.conf\" kubeconfig file\n[kubeconfig] Writing
\"scheduler.conf\" kubeconfig file\n[etcd] Creating static Pod manifest for local etcd in \"/etc/kubernetes/manifests\"\n[control-plane] Using manifest folder \"/etc/kubernetes/manifests\"\n[control-plane] Creating static Pod manifest for \"kube-apiserver\"\n[control-plane] Creating static Pod manifest for \"kube-controller-manager\"\n[control-plane] Creating static Pod manifest for \"kube-scheduler\"\n[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/instance-config.yaml\"\n[patches] Applied patch of type \"application/strategic-merge-patch+json\" to target \"kubeletconfiguration\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"\n[kubelet-start] Starting the kubelet\n[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory \"/etc/kubernetes/manifests\"\n[kubelet-check] Waiting for a healthy kub
elet at http://127.0.0.1:10248/healthz. This can take up to 4m0s\n[kubelet-check] The kubelet is healthy after 1.001057894s\n[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s\n[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez\n[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz\n[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez\n[control-plane-check] kube-scheduler is not healthy after 4m0.000227233s\n[control-plane-check] kube-apiserver is not healthy after 4m0.000272344s\n[control-plane-check] kube-controller-manager is not healthy after 4m0.000497234s\n\nA control plane component may have crashed or exited when started by the container runtime.\nTo troubleshoot, list all containers using your preferred container runtimes CLI.\nHere is one example how you may list all running Kubernetes containers by using crictl:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/cr
io.sock ps -a | grep kube | grep -v pause'\n\tOnce you have found the failing container, you can inspect its logs with:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'\n\n\nstderr:\n\t[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: \"configs\", output: \"modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\\n\", err: exit status 1\n\t[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'\nerror: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get \"https://127.0.0.1:10259/livez\": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get \"https://control-plane.minikube.internal:8443/livez?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused, kube-controller-manager
check failed at https://127.0.0.1:10257/healthz: Get \"https://127.0.0.1:10257/healthz\": dial tcp 127.0.0.1:10257: connect: connection refused]\nTo see the stack trace of this error execute with --v=5 or higher"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: b6d405b9-e141-43fc-9a6e-59ee44bb62b7
datacontenttype: application/json
Data,
{
"currentstep": "12",
"message": "Generating certificates and keys ...",
"name": "Generating certificates",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 9aa5dbfd-bb44-4d7b-a2b3-5d7467cd5b4e
datacontenttype: application/json
Data,
{
"currentstep": "13",
"message": "Booting up control plane ...",
"name": "Booting control plane",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: d36f2f53-e253-4848-804a-0a78b1e8bfcf
datacontenttype: application/json
Data,
{
"message": "Error starting cluster: wait: sudo /bin/bash -c \"env PATH=\"/var/lib/minikube/binaries/v1.34.1:$PATH\" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables\": Process exited with status 1\nstdout:\n[init] Using Kubernetes version: v1.34.1\n[preflight] Running pre-flight checks\n[preflight] The system verification failed. Printing the output from the verification:\n\u001b[0;37mKERNEL_VERSION\u001b[0m: \u001b[0;32m6.8.0-1041-gcp\u001b[0m\n\u001b[0;37mOS\u001b[0m: \u001b[0;32mLinux\u001b[0m\n\u001b[0;37mCGROUPS_CPU\u001b[
0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_CPUSET\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_DEVICES\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_FREEZER\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_MEMORY\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_PIDS\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_HUGETLB\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_IO\u001b[0m: \u001b[0;32menabled\u001b[0m\n[preflight] Pulling images required for setting up a Kubernetes cluster\n[preflight] This might take a minute or two, depending on the speed of your internet connection\n[preflight] You can also perform this action beforehand using 'kubeadm config images pull'\n[certs] Using certificateDir folder \"/var/lib/minikube/certs\"\n[certs] Using existing ca certificate authority\n[certs] Using existing apiserver certificate and key on disk\n[certs] Using existing apiserver-kubelet-client certificate and key on disk\n[certs] U
sing existing front-proxy-ca certificate authority\n[certs] Using existing front-proxy-client certificate and key on disk\n[certs] Using existing etcd/ca certificate authority\n[certs] Using existing etcd/server certificate and key on disk\n[certs] Using existing etcd/peer certificate and key on disk\n[certs] Using existing etcd/healthcheck-client certificate and key on disk\n[certs] Using existing apiserver-etcd-client certificate and key on disk\n[certs] Using the existing \"sa\" key\n[kubeconfig] Using kubeconfig folder \"/etc/kubernetes\"\n[kubeconfig] Writing \"admin.conf\" kubeconfig file\n[kubeconfig] Writing \"super-admin.conf\" kubeconfig file\n[kubeconfig] Writing \"kubelet.conf\" kubeconfig file\n[kubeconfig] Writing \"controller-manager.conf\" kubeconfig file\n[kubeconfig] Writing \"scheduler.conf\" kubeconfig file\n[etcd] Creating static Pod manifest for local etcd in \"/etc/kubernetes/manifests\"\n[control-plane] Using manifest folder \"/etc/kubernetes/manifests\"\n[control-plane] Creating stati
c Pod manifest for \"kube-apiserver\"\n[control-plane] Creating static Pod manifest for \"kube-controller-manager\"\n[control-plane] Creating static Pod manifest for \"kube-scheduler\"\n[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/instance-config.yaml\"\n[patches] Applied patch of type \"application/strategic-merge-patch+json\" to target \"kubeletconfiguration\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"\n[kubelet-start] Starting the kubelet\n[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory \"/etc/kubernetes/manifests\"\n[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s\n[kubelet-check] The kubelet is healthy after 1.001112227s\n[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s\n[
control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez\n[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz\n[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez\n[control-plane-check] kube-apiserver is not healthy after 4m0.000633926s\n[control-plane-check] kube-scheduler is not healthy after 4m0.000668621s\n[control-plane-check] kube-controller-manager is not healthy after 4m0.000644963s\n\nA control plane component may have crashed or exited when started by the container runtime.\nTo troubleshoot, list all containers using your preferred container runtimes CLI.\nHere is one example how you may list all running Kubernetes containers by using crictl:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'\n\tOnce you have found the failing container, you can inspect its logs with:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'\n\n\nstderr:\n\t[WA
RNING SystemVerification]: failed to parse kernel config: unable to load kernel module: \"configs\", output: \"modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\\n\", err: exit status 1\n\t[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'\nerror: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get \"https://127.0.0.1:10259/livez\": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get \"https://127.0.0.1:10257/healthz\": dial tcp 127.0.0.1:10257: connect: connection refused]\nTo see the stack trace of this error execute with --v=5 or higher"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: 011770d5-772e-4d22-b0d3-f6e51882db50
datacontenttype: application/json
Data,
{
"advice": "",
"exitcode": "80",
"issues": "",
"message": "failed to start node: wait: sudo /bin/bash -c \"env PATH=\"/var/lib/minikube/binaries/v1.34.1:$PATH\" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables\": Process exited with status 1\nstdout:\n[init] Using Kubernetes version: v1.34.1\n[preflight] Running pre-flight checks\n[preflight] The system verification failed. Printing the output from the verification:\n\u001b[0;37mKERNEL_VERSION\u001b[0m: \u001b[0;32m6.8.0-1041-gcp\u001b[0m\n\u001b[0;37mOS\u001b[0m: \u001b[0;32mLinux\u001b[0m\n\u001b[0;37mCGROUPS_CPU\u001b[0m
: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_CPUSET\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_DEVICES\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_FREEZER\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_MEMORY\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_PIDS\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_HUGETLB\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_IO\u001b[0m: \u001b[0;32menabled\u001b[0m\n[preflight] Pulling images required for setting up a Kubernetes cluster\n[preflight] This might take a minute or two, depending on the speed of your internet connection\n[preflight] You can also perform this action beforehand using 'kubeadm config images pull'\n[certs] Using certificateDir folder \"/var/lib/minikube/certs\"\n[certs] Using existing ca certificate authority\n[certs] Using existing apiserver certificate and key on disk\n[certs] Using existing apiserver-kubelet-client certificate and key on disk\n[certs] Usi
ng existing front-proxy-ca certificate authority\n[certs] Using existing front-proxy-client certificate and key on disk\n[certs] Using existing etcd/ca certificate authority\n[certs] Using existing etcd/server certificate and key on disk\n[certs] Using existing etcd/peer certificate and key on disk\n[certs] Using existing etcd/healthcheck-client certificate and key on disk\n[certs] Using existing apiserver-etcd-client certificate and key on disk\n[certs] Using the existing \"sa\" key\n[kubeconfig] Using kubeconfig folder \"/etc/kubernetes\"\n[kubeconfig] Writing \"admin.conf\" kubeconfig file\n[kubeconfig] Writing \"super-admin.conf\" kubeconfig file\n[kubeconfig] Writing \"kubelet.conf\" kubeconfig file\n[kubeconfig] Writing \"controller-manager.conf\" kubeconfig file\n[kubeconfig] Writing \"scheduler.conf\" kubeconfig file\n[etcd] Creating static Pod manifest for local etcd in \"/etc/kubernetes/manifests\"\n[control-plane] Using manifest folder \"/etc/kubernetes/manifests\"\n[control-plane] Creating static
Pod manifest for \"kube-apiserver\"\n[control-plane] Creating static Pod manifest for \"kube-controller-manager\"\n[control-plane] Creating static Pod manifest for \"kube-scheduler\"\n[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/instance-config.yaml\"\n[patches] Applied patch of type \"application/strategic-merge-patch+json\" to target \"kubeletconfiguration\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"\n[kubelet-start] Starting the kubelet\n[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory \"/etc/kubernetes/manifests\"\n[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s\n[kubelet-check] The kubelet is healthy after 1.001112227s\n[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s\n[co
ntrol-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez\n[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz\n[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez\n[control-plane-check] kube-apiserver is not healthy after 4m0.000633926s\n[control-plane-check] kube-scheduler is not healthy after 4m0.000668621s\n[control-plane-check] kube-controller-manager is not healthy after 4m0.000644963s\n\nA control plane component may have crashed or exited when started by the container runtime.\nTo troubleshoot, list all containers using your preferred container runtimes CLI.\nHere is one example how you may list all running Kubernetes containers by using crictl:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'\n\tOnce you have found the failing container, you can inspect its logs with:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'\n\n\nstderr:\n\t[WARN
ING SystemVerification]: failed to parse kernel config: unable to load kernel module: \"configs\", output: \"modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\\n\", err: exit status 1\n\t[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'\nerror: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get \"https://127.0.0.1:10259/livez\": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get \"https://127.0.0.1:10257/healthz\": dial tcp 127.0.0.1:10257: connect: connection refused]\nTo see the stack trace of this error execute with --v=5 or higher",
"name": "GUEST_START",
"url": ""
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: ac08bb93-9261-4775-8fad-a6914c94a963
datacontenttype: application/json
Data,
{
"message": "╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│                                                                                           │\n╰────────────────────────────────────────
───────────────────────────────────────────────────╯"
}
]
--- FAIL: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)
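Both JSONOutput step subtests fail for the same underlying reason visible above: kubeadm init timed out waiting for the control plane and was retried, so the step events "Generating certificates" (currentstep 12) and "Booting up control plane" (currentstep 13) were emitted a second time. A minimal sketch of the distinctness check this subtest enforces — illustrative only, not minikube's actual json_output_test.go; the helper name firstDuplicate and the hard-coded sequence (read off the full event list in the IncreasingCurrentSteps output below) are assumptions:

	package main

	import "fmt"

	// firstDuplicate returns the first currentstep value that appears
	// more than once in the emitted step events, if any.
	func firstDuplicate(steps []string) (string, bool) {
		seen := make(map[string]bool)
		for _, s := range steps {
			if seen[s] {
				return s, true
			}
			seen[s] = true
		}
		return "", false
	}

	func main() {
		// Step sequence observed in this run; 12 and 13 repeat because
		// kubeadm init was retried after the first attempt failed.
		observed := []string{"0", "1", "3", "5", "8", "11", "12", "13", "12", "13"}
		if dup, ok := firstDuplicate(observed); ok {
			fmt.Printf("FAIL: currentstep %q emitted more than once\n", dup)
		}
	}

Run against the observed sequence, this reports currentstep "12" as the first repeat, matching the failure recorded here.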

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
json_output_test.go:144: current step is not in increasing order: [Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 025c08a9-24c2-4fab-baa6-2bb0a46c10a4
datacontenttype: application/json
Data,
{
"currentstep": "0",
"message": "[json-output-616465] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)",
"name": "Initial Minikube Setup",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 197eed31-340b-4cf7-af42-381e12f24855
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_LOCATION=21701"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: c4b811ba-e36a-4e5f-91d9-09b1117a2c6b
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 9abef3f5-4ed7-4016-9aa8-e6c8dca32f45
datacontenttype: application/json
Data,
{
"message": "KUBECONFIG=/home/jenkins/minikube-integration/21701-626179/kubeconfig"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 885dcc23-90bc-4ad4-b796-e0a2f10e4010
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_HOME=/home/jenkins/minikube-integration/21701-626179/.minikube"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: bcb32a20-f60f-43e4-84b3-55748dff7a72
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_BIN=out/minikube-linux-amd64"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 968c3cfe-a768-44b1-99a8-7561f60210c6
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_FORCE_SYSTEMD="
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 09830c88-bf19-4c82-bf29-5d96d2e375b4
datacontenttype: application/json
Data,
{
"currentstep": "1",
"message": "Using the docker driver based on user configuration",
"name": "Selecting Driver",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: a0037872-bde4-4e64-9cd5-b1dc7138781d
datacontenttype: application/json
Data,
{
"message": "Using Docker driver with root privileges"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 7c835654-86cb-4c81-a4f3-d3d3fa2afbb8
datacontenttype: application/json
Data,
{
"currentstep": "3",
"message": "Starting \"json-output-616465\" primary control-plane node in \"json-output-616465\" cluster",
"name": "Starting Node",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 54f04c38-049c-4838-bb68-5ba91e47eb4a
datacontenttype: application/json
Data,
{
"currentstep": "5",
"message": "Pulling base image v0.0.48-1759382731-21643 ...",
"name": "Pulling Base Image",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 6ee10676-8088-4261-8453-fc0602506da7
datacontenttype: application/json
Data,
{
"currentstep": "8",
"message": "Creating docker container (CPUs=2, Memory=3072MB) ...",
"name": "Creating Container",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 82e3a5d3-12ac-4dac-aa68-c89ae6a9233a
datacontenttype: application/json
Data,
{
"currentstep": "11",
"message": "Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...",
"name": "Preparing Kubernetes",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: ff40e346-3cf5-427f-8e48-99d4bbbc186c
datacontenttype: application/json
Data,
{
"currentstep": "12",
"message": "Generating certificates and keys ...",
"name": "Generating certificates",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: a9ea346d-aeca-4711-b90e-6e562c08eaa8
datacontenttype: application/json
Data,
{
"currentstep": "13",
"message": "Booting up control plane ...",
"name": "Booting control plane",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: e03c9222-afc4-42d7-be3b-68e2190e16f2
datacontenttype: application/json
Data,
{
"message": "initialization failed, will try again: wait: sudo /bin/bash -c \"env PATH=\"/var/lib/minikube/binaries/v1.34.1:$PATH\" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables\": Process exited with status 1\nstdout:\n[init] Using Kubernetes version: v1.34.1\n[preflight] Running pre-flight checks\n[preflight] The system verification failed. Printing the output from the verification:\n\u001b[0;37mKERNEL_VERSION\u001b[0m: \u001b[0;32m6.8.0-1041-gcp\u001b[0m\n\u001b[0;37mOS\u001b[0m: \u001b[0;32mLinux\u001b[0m\n\u001b[0;37mCGR
OUPS_CPU\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_CPUSET\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_DEVICES\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_FREEZER\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_MEMORY\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_PIDS\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_HUGETLB\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_IO\u001b[0m: \u001b[0;32menabled\u001b[0m\n[preflight] Pulling images required for setting up a Kubernetes cluster\n[preflight] This might take a minute or two, depending on the speed of your internet connection\n[preflight] You can also perform this action beforehand using 'kubeadm config images pull'\n[certs] Using certificateDir folder \"/var/lib/minikube/certs\"\n[certs] Using existing ca certificate authority\n[certs] Using existing apiserver certificate and key on disk\n[certs] Generating \"apiserver-kubelet-client\" certificate and key\n[c
erts] Generating \"front-proxy-ca\" certificate and key\n[certs] Generating \"front-proxy-client\" certificate and key\n[certs] Generating \"etcd/ca\" certificate and key\n[certs] Generating \"etcd/server\" certificate and key\n[certs] etcd/server serving cert is signed for DNS names [json-output-616465 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]\n[certs] Generating \"etcd/peer\" certificate and key\n[certs] etcd/peer serving cert is signed for DNS names [json-output-616465 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]\n[certs] Generating \"etcd/healthcheck-client\" certificate and key\n[certs] Generating \"apiserver-etcd-client\" certificate and key\n[certs] Generating \"sa\" key and public key\n[kubeconfig] Using kubeconfig folder \"/etc/kubernetes\"\n[kubeconfig] Writing \"admin.conf\" kubeconfig file\n[kubeconfig] Writing \"super-admin.conf\" kubeconfig file\n[kubeconfig] Writing \"kubelet.conf\" kubeconfig file\n[kubeconfig] Writing \"controller-manager.conf\" kubeconfig file\n[kubeconfig] Writing
\"scheduler.conf\" kubeconfig file\n[etcd] Creating static Pod manifest for local etcd in \"/etc/kubernetes/manifests\"\n[control-plane] Using manifest folder \"/etc/kubernetes/manifests\"\n[control-plane] Creating static Pod manifest for \"kube-apiserver\"\n[control-plane] Creating static Pod manifest for \"kube-controller-manager\"\n[control-plane] Creating static Pod manifest for \"kube-scheduler\"\n[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/instance-config.yaml\"\n[patches] Applied patch of type \"application/strategic-merge-patch+json\" to target \"kubeletconfiguration\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"\n[kubelet-start] Starting the kubelet\n[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory \"/etc/kubernetes/manifests\"\n[kubelet-check] Waiting for a healthy kub
elet at http://127.0.0.1:10248/healthz. This can take up to 4m0s\n[kubelet-check] The kubelet is healthy after 1.001057894s\n[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s\n[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez\n[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz\n[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez\n[control-plane-check] kube-scheduler is not healthy after 4m0.000227233s\n[control-plane-check] kube-apiserver is not healthy after 4m0.000272344s\n[control-plane-check] kube-controller-manager is not healthy after 4m0.000497234s\n\nA control plane component may have crashed or exited when started by the container runtime.\nTo troubleshoot, list all containers using your preferred container runtimes CLI.\nHere is one example how you may list all running Kubernetes containers by using crictl:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/cr
io.sock ps -a | grep kube | grep -v pause'\n\tOnce you have found the failing container, you can inspect its logs with:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'\n\n\nstderr:\n\t[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: \"configs\", output: \"modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\\n\", err: exit status 1\n\t[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'\nerror: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get \"https://127.0.0.1:10259/livez\": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get \"https://control-plane.minikube.internal:8443/livez?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused, kube-controller-manager
check failed at https://127.0.0.1:10257/healthz: Get \"https://127.0.0.1:10257/healthz\": dial tcp 127.0.0.1:10257: connect: connection refused]\nTo see the stack trace of this error execute with --v=5 or higher"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: b6d405b9-e141-43fc-9a6e-59ee44bb62b7
datacontenttype: application/json
Data,
{
"currentstep": "12",
"message": "Generating certificates and keys ...",
"name": "Generating certificates",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 9aa5dbfd-bb44-4d7b-a2b3-5d7467cd5b4e
datacontenttype: application/json
Data,
{
"currentstep": "13",
"message": "Booting up control plane ...",
"name": "Booting control plane",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: d36f2f53-e253-4848-804a-0a78b1e8bfcf
datacontenttype: application/json
Data,
{
"message": "Error starting cluster: wait: sudo /bin/bash -c \"env PATH=\"/var/lib/minikube/binaries/v1.34.1:$PATH\" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables\": Process exited with status 1\nstdout:\n[init] Using Kubernetes version: v1.34.1\n[preflight] Running pre-flight checks\n[preflight] The system verification failed. Printing the output from the verification:\n\u001b[0;37mKERNEL_VERSION\u001b[0m: \u001b[0;32m6.8.0-1041-gcp\u001b[0m\n\u001b[0;37mOS\u001b[0m: \u001b[0;32mLinux\u001b[0m\n\u001b[0;37mCGROUPS_CPU\u001b[
0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_CPUSET\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_DEVICES\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_FREEZER\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_MEMORY\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_PIDS\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_HUGETLB\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_IO\u001b[0m: \u001b[0;32menabled\u001b[0m\n[preflight] Pulling images required for setting up a Kubernetes cluster\n[preflight] This might take a minute or two, depending on the speed of your internet connection\n[preflight] You can also perform this action beforehand using 'kubeadm config images pull'\n[certs] Using certificateDir folder \"/var/lib/minikube/certs\"\n[certs] Using existing ca certificate authority\n[certs] Using existing apiserver certificate and key on disk\n[certs] Using existing apiserver-kubelet-client certificate and key on disk\n[certs] U
sing existing front-proxy-ca certificate authority\n[certs] Using existing front-proxy-client certificate and key on disk\n[certs] Using existing etcd/ca certificate authority\n[certs] Using existing etcd/server certificate and key on disk\n[certs] Using existing etcd/peer certificate and key on disk\n[certs] Using existing etcd/healthcheck-client certificate and key on disk\n[certs] Using existing apiserver-etcd-client certificate and key on disk\n[certs] Using the existing \"sa\" key\n[kubeconfig] Using kubeconfig folder \"/etc/kubernetes\"\n[kubeconfig] Writing \"admin.conf\" kubeconfig file\n[kubeconfig] Writing \"super-admin.conf\" kubeconfig file\n[kubeconfig] Writing \"kubelet.conf\" kubeconfig file\n[kubeconfig] Writing \"controller-manager.conf\" kubeconfig file\n[kubeconfig] Writing \"scheduler.conf\" kubeconfig file\n[etcd] Creating static Pod manifest for local etcd in \"/etc/kubernetes/manifests\"\n[control-plane] Using manifest folder \"/etc/kubernetes/manifests\"\n[control-plane] Creating stati
c Pod manifest for \"kube-apiserver\"\n[control-plane] Creating static Pod manifest for \"kube-controller-manager\"\n[control-plane] Creating static Pod manifest for \"kube-scheduler\"\n[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/instance-config.yaml\"\n[patches] Applied patch of type \"application/strategic-merge-patch+json\" to target \"kubeletconfiguration\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"\n[kubelet-start] Starting the kubelet\n[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory \"/etc/kubernetes/manifests\"\n[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s\n[kubelet-check] The kubelet is healthy after 1.001112227s\n[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s\n[
control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez\n[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz\n[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez\n[control-plane-check] kube-apiserver is not healthy after 4m0.000633926s\n[control-plane-check] kube-scheduler is not healthy after 4m0.000668621s\n[control-plane-check] kube-controller-manager is not healthy after 4m0.000644963s\n\nA control plane component may have crashed or exited when started by the container runtime.\nTo troubleshoot, list all containers using your preferred container runtimes CLI.\nHere is one example how you may list all running Kubernetes containers by using crictl:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'\n\tOnce you have found the failing container, you can inspect its logs with:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'\n\n\nstderr:\n\t[WA
RNING SystemVerification]: failed to parse kernel config: unable to load kernel module: \"configs\", output: \"modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\\n\", err: exit status 1\n\t[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'\nerror: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get \"https://127.0.0.1:10259/livez\": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get \"https://127.0.0.1:10257/healthz\": dial tcp 127.0.0.1:10257: connect: connection refused]\nTo see the stack trace of this error execute with --v=5 or higher"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: 011770d5-772e-4d22-b0d3-f6e51882db50
datacontenttype: application/json
Data,
{
"advice": "",
"exitcode": "80",
"issues": "",
"message": "failed to start node: wait: sudo /bin/bash -c \"env PATH=\"/var/lib/minikube/binaries/v1.34.1:$PATH\" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables\": Process exited with status 1\nstdout:\n[init] Using Kubernetes version: v1.34.1\n[preflight] Running pre-flight checks\n[preflight] The system verification failed. Printing the output from the verification:\n\u001b[0;37mKERNEL_VERSION\u001b[0m: \u001b[0;32m6.8.0-1041-gcp\u001b[0m\n\u001b[0;37mOS\u001b[0m: \u001b[0;32mLinux\u001b[0m\n\u001b[0;37mCGROUPS_CPU\u001b[0m
: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_CPUSET\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_DEVICES\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_FREEZER\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_MEMORY\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_PIDS\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_HUGETLB\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_IO\u001b[0m: \u001b[0;32menabled\u001b[0m\n[preflight] Pulling images required for setting up a Kubernetes cluster\n[preflight] This might take a minute or two, depending on the speed of your internet connection\n[preflight] You can also perform this action beforehand using 'kubeadm config images pull'\n[certs] Using certificateDir folder \"/var/lib/minikube/certs\"\n[certs] Using existing ca certificate authority\n[certs] Using existing apiserver certificate and key on disk\n[certs] Using existing apiserver-kubelet-client certificate and key on disk\n[certs] Usi
ng existing front-proxy-ca certificate authority\n[certs] Using existing front-proxy-client certificate and key on disk\n[certs] Using existing etcd/ca certificate authority\n[certs] Using existing etcd/server certificate and key on disk\n[certs] Using existing etcd/peer certificate and key on disk\n[certs] Using existing etcd/healthcheck-client certificate and key on disk\n[certs] Using existing apiserver-etcd-client certificate and key on disk\n[certs] Using the existing \"sa\" key\n[kubeconfig] Using kubeconfig folder \"/etc/kubernetes\"\n[kubeconfig] Writing \"admin.conf\" kubeconfig file\n[kubeconfig] Writing \"super-admin.conf\" kubeconfig file\n[kubeconfig] Writing \"kubelet.conf\" kubeconfig file\n[kubeconfig] Writing \"controller-manager.conf\" kubeconfig file\n[kubeconfig] Writing \"scheduler.conf\" kubeconfig file\n[etcd] Creating static Pod manifest for local etcd in \"/etc/kubernetes/manifests\"\n[control-plane] Using manifest folder \"/etc/kubernetes/manifests\"\n[control-plane] Creating static
Pod manifest for \"kube-apiserver\"\n[control-plane] Creating static Pod manifest for \"kube-controller-manager\"\n[control-plane] Creating static Pod manifest for \"kube-scheduler\"\n[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/instance-config.yaml\"\n[patches] Applied patch of type \"application/strategic-merge-patch+json\" to target \"kubeletconfiguration\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"\n[kubelet-start] Starting the kubelet\n[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory \"/etc/kubernetes/manifests\"\n[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s\n[kubelet-check] The kubelet is healthy after 1.001112227s\n[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s\n[co
ntrol-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez\n[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz\n[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez\n[control-plane-check] kube-apiserver is not healthy after 4m0.000633926s\n[control-plane-check] kube-scheduler is not healthy after 4m0.000668621s\n[control-plane-check] kube-controller-manager is not healthy after 4m0.000644963s\n\nA control plane component may have crashed or exited when started by the container runtime.\nTo troubleshoot, list all containers using your preferred container runtimes CLI.\nHere is one example how you may list all running Kubernetes containers by using crictl:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'\n\tOnce you have found the failing container, you can inspect its logs with:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'\n\n\nstderr:\n\t[WARN
ING SystemVerification]: failed to parse kernel config: unable to load kernel module: \"configs\", output: \"modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\\n\", err: exit status 1\n\t[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'\nerror: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get \"https://127.0.0.1:10259/livez\": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get \"https://127.0.0.1:10257/healthz\": dial tcp 127.0.0.1:10257: connect: connection refused]\nTo see the stack trace of this error execute with --v=5 or higher",
"name": "GUEST_START",
"url": ""
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: ac08bb93-9261-4775-8fad-a6914c94a963
datacontenttype: application/json
Data,
{
"message": "╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│                                                                                           │\n╰────────────────────────────────────────
───────────────────────────────────────────────────╯"
}
]
--- FAIL: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)
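This subtest is the complementary ordering assertion: currentstep values must never decrease across the event stream, and re-emitting step 12 after step 13 (as the retried kubeadm init above does) violates it, producing the quoted "current step is not in increasing order" failure. A minimal sketch of such a monotonicity check — again illustrative, not the real json_output_test.go; assertIncreasing is an assumed name and the sequence is copied from the event list above:

	package main

	import (
		"fmt"
		"strconv"
	)

	// assertIncreasing fails if any currentstep value is lower than the
	// one emitted before it.
	func assertIncreasing(steps []string) error {
		prev := -1
		for _, s := range steps {
			n, err := strconv.Atoi(s)
			if err != nil {
				return fmt.Errorf("non-numeric currentstep %q: %v", s, err)
			}
			if n < prev {
				return fmt.Errorf("current step is not in increasing order: %d after %d", n, prev)
			}
			prev = n
		}
		return nil
	}

	func main() {
		observed := []string{"0", "1", "3", "5", "8", "11", "12", "13", "12", "13"}
		if err := assertIncreasing(observed); err != nil {
			fmt.Println("FAIL:", err) // fires on the second "12"
		}
	}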

                                                
                                    
x
+
TestMinikubeProfile (507.21s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-651896 --driver=docker  --container-runtime=crio
E1006 15:20:30.521015  629719 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 15:25:30.521761  629719 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p first-651896 --driver=docker  --container-runtime=crio: exit status 80 (8m23.794990542s)

                                                
                                                
-- stdout --
	* [first-651896] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21701
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21701-626179/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21701-626179/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "first-651896" primary control-plane node in "first-651896" cluster
	* Pulling base image v0.0.48-1759382731-21643 ...
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [first-651896 localhost] and IPs [192.168.58.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [first-651896 localhost] and IPs [192.168.58.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.000852414s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.58.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.00117777s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001354104s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001414821s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.58.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001793335s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.58.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.001036508s
	[control-plane-check] kube-apiserver is not healthy after 4m0.001102177s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001335822s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.58.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.58.2:8443: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001793335s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.58.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.001036508s
	[control-plane-check] kube-apiserver is not healthy after 4m0.001102177s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001335822s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI.
	Here is one example of how you can list all running Kubernetes containers using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.58.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.58.2:8443: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	* 

** /stderr **
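
The kubeadm advice captured above can be followed directly on the failed node. A minimal sketch of that triage flow, assuming shell access through the profile this run created (first-651896) and the CRI-O socket path shown in the log; the sudo prefix and the <CONTAINERID> placeholder are illustrative:

	out/minikube-linux-amd64 ssh -p first-651896
	# list every kube-* container, including exited ones, over the CRI-O socket
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# then read the logs of whichever container crashed
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs <CONTAINERID>
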
minikube_profile_test.go:46: test pre-condition failed. args "out/minikube-linux-amd64 start -p first-651896 --driver=docker  --container-runtime=crio": exit status 80
panic.go:636: *** TestMinikubeProfile FAILED at 2025-10-06 15:27:08.377048482 +0000 UTC m=+5473.548660404
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMinikubeProfile]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMinikubeProfile]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect second-674109
helpers_test.go:239: (dbg) Non-zero exit: docker inspect second-674109: exit status 1 (28.998245ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: second-674109

** /stderr **
helpers_test.go:241: failed to get docker inspect: exit status 1
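
A note on the check above: docker inspect exits non-zero when the object does not exist, so the exit status alone distinguishes a never-created profile from a merely stopped one. A minimal sketch reusing the container names from this run:

	docker inspect --format '{{.State.Status}}' first-651896    # prints "running" for the existing container
	docker inspect --format '{{.State.Status}}' second-674109   # exit status 1: no such object
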
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p second-674109 -n second-674109
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p second-674109 -n second-674109: exit status 85 (58.133082ms)

-- stdout --
	* Profile "second-674109" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p second-674109"

-- /stdout --
helpers_test.go:247: status error: exit status 85 (may be ok)
helpers_test.go:249: "second-674109" host is not running, skipping log retrieval (state="* Profile \"second-674109\" not found. Run \"minikube profile list\" to view all profiles.")
helpers_test.go:175: Cleaning up "second-674109" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-674109
panic.go:636: *** TestMinikubeProfile FAILED at 2025-10-06 15:27:08.61625017 +0000 UTC m=+5473.787862066
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMinikubeProfile]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMinikubeProfile]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect first-651896
helpers_test.go:243: (dbg) docker inspect first-651896:

-- stdout --
	[
	    {
	        "Id": "3079a20d5d5b42756544b221d1866df91e24b7182cb39172b7bd59f6098231c5",
	        "Created": "2025-10-06T15:18:49.685190205Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 735719,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-06T15:18:49.723766382Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/3079a20d5d5b42756544b221d1866df91e24b7182cb39172b7bd59f6098231c5/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3079a20d5d5b42756544b221d1866df91e24b7182cb39172b7bd59f6098231c5/hostname",
	        "HostsPath": "/var/lib/docker/containers/3079a20d5d5b42756544b221d1866df91e24b7182cb39172b7bd59f6098231c5/hosts",
	        "LogPath": "/var/lib/docker/containers/3079a20d5d5b42756544b221d1866df91e24b7182cb39172b7bd59f6098231c5/3079a20d5d5b42756544b221d1866df91e24b7182cb39172b7bd59f6098231c5-json.log",
	        "Name": "/first-651896",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "first-651896:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "first-651896",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 8388608000,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 16777216000,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "3079a20d5d5b42756544b221d1866df91e24b7182cb39172b7bd59f6098231c5",
	                "LowerDir": "/var/lib/docker/overlay2/cc35a601f1783a3e6b888082d1af6bfcc6ffb4b94ccffc726f08c12c5aa32239-init/diff:/var/lib/docker/overlay2/498c39ad2e273bbda04a4b230222b9767ea2da097b1fe98436168d26143cd080/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cc35a601f1783a3e6b888082d1af6bfcc6ffb4b94ccffc726f08c12c5aa32239/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cc35a601f1783a3e6b888082d1af6bfcc6ffb4b94ccffc726f08c12c5aa32239/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cc35a601f1783a3e6b888082d1af6bfcc6ffb4b94ccffc726f08c12c5aa32239/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "first-651896",
	                "Source": "/var/lib/docker/volumes/first-651896/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "first-651896",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "first-651896",
	                "name.minikube.sigs.k8s.io": "first-651896",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e7917997298b1699bc2dcff4bfa888d0b8a2738c4608200c280efa4c7981a0a6",
	            "SandboxKey": "/var/run/docker/netns/e7917997298b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32928"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32929"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32932"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32930"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32931"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "first-651896": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "46:3c:4c:dd:aa:95",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "50fce3e2ba4a135a79725734f16101d84fbeee9e456e94ccf1b1bcd871f5fbba",
	                    "EndpointID": "db4fa9b92e7912db08ce00ca724af5a0ae362b61f46b3954bb5a94e531d2d323",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "first-651896",
	                        "3079a20d5d5b"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
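
The fields that matter in the inspect dump above can be pulled out with Go templates in the same style minikube itself uses later in this log; a sketch against the first-651896 container, with expected values taken from the JSON:

	docker container inspect -f '{{.State.Status}}' first-651896                                               # running
	docker container inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' first-651896        # 192.168.58.2
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' first-651896  # 32928
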
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p first-651896 -n first-651896
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p first-651896 -n first-651896: exit status 6 (299.593728ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1006 15:27:08.919351  740269 status.go:458] kubeconfig endpoint: get endpoint: "first-651896" does not appear in /home/jenkins/minikube-integration/21701-626179/kubeconfig

** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
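
Exit status 6 corresponds to the stale-kubeconfig warning in the stdout above: the container is Running, but no "first-651896" entry exists in the kubeconfig the harness points at. A minimal sketch of the fix the warning itself suggests, with the kubeconfig path taken from this run:

	# confirm the context is missing from the kubeconfig used by the test
	KUBECONFIG=/home/jenkins/minikube-integration/21701-626179/kubeconfig kubectl config get-contexts
	# regenerate the kubectl context for the profile
	out/minikube-linux-amd64 update-context -p first-651896
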
helpers_test.go:252: <<< TestMinikubeProfile FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMinikubeProfile]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p first-651896 logs -n 25
helpers_test.go:260: TestMinikubeProfile logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬──────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                          ARGS                                                           │         PROFILE          │   USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼──────────┼─────────┼─────────────────────┼─────────────────────┤
	│ node    │ ha-481559 node delete m03 --alsologtostderr -v 5                                                                        │ ha-481559                │ jenkins  │ v1.37.0 │ 06 Oct 25 15:01 UTC │                     │
	│ stop    │ ha-481559 stop --alsologtostderr -v 5                                                                                   │ ha-481559                │ jenkins  │ v1.37.0 │ 06 Oct 25 15:01 UTC │ 06 Oct 25 15:01 UTC │
	│ start   │ ha-481559 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio                            │ ha-481559                │ jenkins  │ v1.37.0 │ 06 Oct 25 15:01 UTC │                     │
	│ node    │ ha-481559 node add --control-plane --alsologtostderr -v 5                                                               │ ha-481559                │ jenkins  │ v1.37.0 │ 06 Oct 25 15:07 UTC │                     │
	│ delete  │ -p ha-481559                                                                                                            │ ha-481559                │ jenkins  │ v1.37.0 │ 06 Oct 25 15:08 UTC │ 06 Oct 25 15:08 UTC │
	│ start   │ -p json-output-616465 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio │ json-output-616465       │ testUser │ v1.37.0 │ 06 Oct 25 15:08 UTC │                     │
	│ pause   │ -p json-output-616465 --output=json --user=testUser                                                                     │ json-output-616465       │ testUser │ v1.37.0 │ 06 Oct 25 15:16 UTC │ 06 Oct 25 15:16 UTC │
	│ unpause │ -p json-output-616465 --output=json --user=testUser                                                                     │ json-output-616465       │ testUser │ v1.37.0 │ 06 Oct 25 15:16 UTC │ 06 Oct 25 15:16 UTC │
	│ stop    │ -p json-output-616465 --output=json --user=testUser                                                                     │ json-output-616465       │ testUser │ v1.37.0 │ 06 Oct 25 15:16 UTC │ 06 Oct 25 15:16 UTC │
	│ delete  │ -p json-output-616465                                                                                                   │ json-output-616465       │ jenkins  │ v1.37.0 │ 06 Oct 25 15:16 UTC │ 06 Oct 25 15:16 UTC │
	│ start   │ -p json-output-error-057185 --memory=3072 --output=json --wait=true --driver=fail                                       │ json-output-error-057185 │ jenkins  │ v1.37.0 │ 06 Oct 25 15:16 UTC │                     │
	│ delete  │ -p json-output-error-057185                                                                                             │ json-output-error-057185 │ jenkins  │ v1.37.0 │ 06 Oct 25 15:16 UTC │ 06 Oct 25 15:16 UTC │
	│ start   │ -p docker-network-627402 --network=                                                                                     │ docker-network-627402    │ jenkins  │ v1.37.0 │ 06 Oct 25 15:16 UTC │ 06 Oct 25 15:17 UTC │
	│ delete  │ -p docker-network-627402                                                                                                │ docker-network-627402    │ jenkins  │ v1.37.0 │ 06 Oct 25 15:17 UTC │ 06 Oct 25 15:17 UTC │
	│ start   │ -p docker-network-020891 --network=bridge                                                                               │ docker-network-020891    │ jenkins  │ v1.37.0 │ 06 Oct 25 15:17 UTC │ 06 Oct 25 15:17 UTC │
	│ delete  │ -p docker-network-020891                                                                                                │ docker-network-020891    │ jenkins  │ v1.37.0 │ 06 Oct 25 15:17 UTC │ 06 Oct 25 15:17 UTC │
	│ start   │ -p existing-network-726483 --network=existing-network                                                                   │ existing-network-726483  │ jenkins  │ v1.37.0 │ 06 Oct 25 15:17 UTC │ 06 Oct 25 15:17 UTC │
	│ delete  │ -p existing-network-726483                                                                                              │ existing-network-726483  │ jenkins  │ v1.37.0 │ 06 Oct 25 15:17 UTC │ 06 Oct 25 15:17 UTC │
	│ start   │ -p custom-subnet-837757 --subnet=192.168.60.0/24                                                                        │ custom-subnet-837757     │ jenkins  │ v1.37.0 │ 06 Oct 25 15:17 UTC │ 06 Oct 25 15:18 UTC │
	│ delete  │ -p custom-subnet-837757                                                                                                 │ custom-subnet-837757     │ jenkins  │ v1.37.0 │ 06 Oct 25 15:18 UTC │ 06 Oct 25 15:18 UTC │
	│ start   │ -p static-ip-097659 --static-ip=192.168.200.200                                                                         │ static-ip-097659         │ jenkins  │ v1.37.0 │ 06 Oct 25 15:18 UTC │ 06 Oct 25 15:18 UTC │
	│ ip      │ static-ip-097659 ip                                                                                                     │ static-ip-097659         │ jenkins  │ v1.37.0 │ 06 Oct 25 15:18 UTC │ 06 Oct 25 15:18 UTC │
	│ delete  │ -p static-ip-097659                                                                                                     │ static-ip-097659         │ jenkins  │ v1.37.0 │ 06 Oct 25 15:18 UTC │ 06 Oct 25 15:18 UTC │
	│ start   │ -p first-651896 --driver=docker  --container-runtime=crio                                                               │ first-651896             │ jenkins  │ v1.37.0 │ 06 Oct 25 15:18 UTC │                     │
	│ delete  │ -p second-674109                                                                                                        │ second-674109            │ jenkins  │ v1.37.0 │ 06 Oct 25 15:27 UTC │ 06 Oct 25 15:27 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴──────────┴─────────┴─────────────────────┴─────────────────────┘
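	
	The Audit table above and the Last Start section below are both part of the "logs -n 25" capture; a sketch reproducing the same post-mortem view outside the harness, using the binary path and profile name from this run (the --file variant matches the advice box earlier in the log):
	
		out/minikube-linux-amd64 -p first-651896 logs -n 25
		out/minikube-linux-amd64 -p first-651896 logs --file=logs.txt   # write the capture to a file to attach to a GitHub issue
	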
	
	
	==> Last Start <==
	Log file created at: 2025/10/06 15:18:44
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1006 15:18:44.624091  735145 out.go:360] Setting OutFile to fd 1 ...
	I1006 15:18:44.624263  735145 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 15:18:44.624269  735145 out.go:374] Setting ErrFile to fd 2...
	I1006 15:18:44.624275  735145 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 15:18:44.624472  735145 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-626179/.minikube/bin
	I1006 15:18:44.624963  735145 out.go:368] Setting JSON to false
	I1006 15:18:44.625999  735145 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":21661,"bootTime":1759742264,"procs":192,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1006 15:18:44.626086  735145 start.go:140] virtualization: kvm guest
	I1006 15:18:44.628009  735145 out.go:179] * [first-651896] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1006 15:18:44.629482  735145 notify.go:220] Checking for updates...
	I1006 15:18:44.629511  735145 out.go:179]   - MINIKUBE_LOCATION=21701
	I1006 15:18:44.630658  735145 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1006 15:18:44.631865  735145 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21701-626179/kubeconfig
	I1006 15:18:44.633071  735145 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21701-626179/.minikube
	I1006 15:18:44.634335  735145 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1006 15:18:44.635620  735145 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1006 15:18:44.637020  735145 driver.go:421] Setting default libvirt URI to qemu:///system
	I1006 15:18:44.660647  735145 docker.go:123] docker version: linux-28.5.0:Docker Engine - Community
	I1006 15:18:44.660707  735145 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 15:18:44.715688  735145 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-06 15:18:44.705736094 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1006 15:18:44.715825  735145 docker.go:318] overlay module found
	I1006 15:18:44.717529  735145 out.go:179] * Using the docker driver based on user configuration
	I1006 15:18:44.718635  735145 start.go:304] selected driver: docker
	I1006 15:18:44.718642  735145 start.go:924] validating driver "docker" against <nil>
	I1006 15:18:44.718652  735145 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1006 15:18:44.718737  735145 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 15:18:44.773302  735145 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-06 15:18:44.763655917 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1006 15:18:44.773480  735145 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1006 15:18:44.774011  735145 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1006 15:18:44.774144  735145 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1006 15:18:44.775764  735145 out.go:179] * Using Docker driver with root privileges
	I1006 15:18:44.776839  735145 cni.go:84] Creating CNI manager for ""
	I1006 15:18:44.776889  735145 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1006 15:18:44.776896  735145 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1006 15:18:44.776952  735145 start.go:348] cluster config:
	{Name:first-651896 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:first-651896 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 15:18:44.778078  735145 out.go:179] * Starting "first-651896" primary control-plane node in "first-651896" cluster
	I1006 15:18:44.779155  735145 cache.go:123] Beginning downloading kic base image for docker with crio
	I1006 15:18:44.780427  735145 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1006 15:18:44.781447  735145 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1006 15:18:44.781481  735145 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21701-626179/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1006 15:18:44.781486  735145 cache.go:58] Caching tarball of preloaded images
	I1006 15:18:44.781559  735145 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1006 15:18:44.781594  735145 preload.go:233] Found /home/jenkins/minikube-integration/21701-626179/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1006 15:18:44.781601  735145 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1006 15:18:44.781914  735145 profile.go:143] Saving config to /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/first-651896/config.json ...
	I1006 15:18:44.781934  735145 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/first-651896/config.json: {Name:mkfd21d1f4cf8f2524630e2b8b980f70ff335aec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 15:18:44.801546  735145 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1006 15:18:44.801555  735145 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1006 15:18:44.801569  735145 cache.go:232] Successfully downloaded all kic artifacts
	I1006 15:18:44.801588  735145 start.go:360] acquireMachinesLock for first-651896: {Name:mk794513e96ccdcfb4a8e5b2be35ab21dcaac6de Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1006 15:18:44.801671  735145 start.go:364] duration metric: took 71.398µs to acquireMachinesLock for "first-651896"
	I1006 15:18:44.801689  735145 start.go:93] Provisioning new machine with config: &{Name:first-651896 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:first-651896 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1006 15:18:44.801739  735145 start.go:125] createHost starting for "" (driver="docker")
	I1006 15:18:44.803728  735145 out.go:252] * Creating docker container (CPUs=2, Memory=8000MB) ...
	I1006 15:18:44.803921  735145 start.go:159] libmachine.API.Create for "first-651896" (driver="docker")
	I1006 15:18:44.803943  735145 client.go:168] LocalClient.Create starting
	I1006 15:18:44.803986  735145 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem
	I1006 15:18:44.804021  735145 main.go:141] libmachine: Decoding PEM data...
	I1006 15:18:44.804031  735145 main.go:141] libmachine: Parsing certificate...
	I1006 15:18:44.804083  735145 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21701-626179/.minikube/certs/cert.pem
	I1006 15:18:44.804096  735145 main.go:141] libmachine: Decoding PEM data...
	I1006 15:18:44.804108  735145 main.go:141] libmachine: Parsing certificate...
	I1006 15:18:44.804446  735145 cli_runner.go:164] Run: docker network inspect first-651896 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1006 15:18:44.820640  735145 cli_runner.go:211] docker network inspect first-651896 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1006 15:18:44.820690  735145 network_create.go:284] running [docker network inspect first-651896] to gather additional debugging logs...
	I1006 15:18:44.820702  735145 cli_runner.go:164] Run: docker network inspect first-651896
	W1006 15:18:44.836196  735145 cli_runner.go:211] docker network inspect first-651896 returned with exit code 1
	I1006 15:18:44.836240  735145 network_create.go:287] error running [docker network inspect first-651896]: docker network inspect first-651896: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network first-651896 not found
	I1006 15:18:44.836257  735145 network_create.go:289] output of [docker network inspect first-651896]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network first-651896 not found
	
	** /stderr **
	I1006 15:18:44.836381  735145 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1006 15:18:44.852297  735145 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-193d0db7d41d IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:d2:12:84:13:2e:2e} reservation:<nil>}
	I1006 15:18:44.852954  735145 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001b35050}
	I1006 15:18:44.852989  735145 network_create.go:124] attempt to create docker network first-651896 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I1006 15:18:44.853070  735145 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=first-651896 first-651896
	I1006 15:18:44.907711  735145 network_create.go:108] docker network first-651896 192.168.58.0/24 created
	I1006 15:18:44.907734  735145 kic.go:121] calculated static IP "192.168.58.2" for the "first-651896" container
	I1006 15:18:44.907809  735145 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1006 15:18:44.924293  735145 cli_runner.go:164] Run: docker volume create first-651896 --label name.minikube.sigs.k8s.io=first-651896 --label created_by.minikube.sigs.k8s.io=true
	I1006 15:18:44.940891  735145 oci.go:103] Successfully created a docker volume first-651896
	I1006 15:18:44.940961  735145 cli_runner.go:164] Run: docker run --rm --name first-651896-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=first-651896 --entrypoint /usr/bin/test -v first-651896:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib
	I1006 15:18:45.308602  735145 oci.go:107] Successfully prepared a docker volume first-651896
	I1006 15:18:45.308639  735145 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1006 15:18:45.308662  735145 kic.go:194] Starting extracting preloaded images to volume ...
	I1006 15:18:45.308719  735145 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21701-626179/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v first-651896:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir
	I1006 15:18:49.612722  735145 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21701-626179/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v first-651896:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir: (4.303954902s)
	I1006 15:18:49.612748  735145 kic.go:203] duration metric: took 4.304082705s to extract preloaded images to volume ...
	W1006 15:18:49.612843  735145 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1006 15:18:49.612866  735145 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1006 15:18:49.612905  735145 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1006 15:18:49.670046  735145 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname first-651896 --name first-651896 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=first-651896 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=first-651896 --network first-651896 --ip 192.168.58.2 --volume first-651896:/var --security-opt apparmor=unconfined --memory=8000mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d
	I1006 15:18:49.928646  735145 cli_runner.go:164] Run: docker container inspect first-651896 --format={{.State.Running}}
	I1006 15:18:49.946185  735145 cli_runner.go:164] Run: docker container inspect first-651896 --format={{.State.Status}}
	I1006 15:18:49.964649  735145 cli_runner.go:164] Run: docker exec first-651896 stat /var/lib/dpkg/alternatives/iptables
	I1006 15:18:50.011412  735145 oci.go:144] the created container "first-651896" has a running status.
	I1006 15:18:50.011438  735145 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21701-626179/.minikube/machines/first-651896/id_rsa...
	I1006 15:18:50.140655  735145 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21701-626179/.minikube/machines/first-651896/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1006 15:18:50.168932  735145 cli_runner.go:164] Run: docker container inspect first-651896 --format={{.State.Status}}
	I1006 15:18:50.191686  735145 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1006 15:18:50.191699  735145 kic_runner.go:114] Args: [docker exec --privileged first-651896 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1006 15:18:50.239093  735145 cli_runner.go:164] Run: docker container inspect first-651896 --format={{.State.Status}}
	I1006 15:18:50.257764  735145 machine.go:93] provisionDockerMachine start ...
	I1006 15:18:50.257867  735145 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" first-651896
	I1006 15:18:50.275514  735145 main.go:141] libmachine: Using SSH client type: native
	I1006 15:18:50.275878  735145 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32928 <nil> <nil>}
	I1006 15:18:50.275897  735145 main.go:141] libmachine: About to run SSH command:
	hostname
	I1006 15:18:50.276671  735145 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45028->127.0.0.1:32928: read: connection reset by peer
	I1006 15:18:53.423387  735145 main.go:141] libmachine: SSH cmd err, output: <nil>: first-651896
	
	I1006 15:18:53.423409  735145 ubuntu.go:182] provisioning hostname "first-651896"
	I1006 15:18:53.423475  735145 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" first-651896
	I1006 15:18:53.441965  735145 main.go:141] libmachine: Using SSH client type: native
	I1006 15:18:53.442191  735145 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32928 <nil> <nil>}
	I1006 15:18:53.442198  735145 main.go:141] libmachine: About to run SSH command:
	sudo hostname first-651896 && echo "first-651896" | sudo tee /etc/hostname
	I1006 15:18:53.595828  735145 main.go:141] libmachine: SSH cmd err, output: <nil>: first-651896
	
	I1006 15:18:53.595896  735145 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" first-651896
	I1006 15:18:53.614985  735145 main.go:141] libmachine: Using SSH client type: native
	I1006 15:18:53.615219  735145 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32928 <nil> <nil>}
	I1006 15:18:53.615236  735145 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfirst-651896' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 first-651896/g' /etc/hosts;
				else 
					echo '127.0.1.1 first-651896' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1006 15:18:53.759147  735145 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1006 15:18:53.759166  735145 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21701-626179/.minikube CaCertPath:/home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21701-626179/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21701-626179/.minikube}
	I1006 15:18:53.759196  735145 ubuntu.go:190] setting up certificates
	I1006 15:18:53.759220  735145 provision.go:84] configureAuth start
	I1006 15:18:53.759287  735145 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" first-651896
	I1006 15:18:53.776818  735145 provision.go:143] copyHostCerts
	I1006 15:18:53.776871  735145 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-626179/.minikube/ca.pem, removing ...
	I1006 15:18:53.776879  735145 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-626179/.minikube/ca.pem
	I1006 15:18:53.776948  735145 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21701-626179/.minikube/ca.pem (1082 bytes)
	I1006 15:18:53.777053  735145 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-626179/.minikube/cert.pem, removing ...
	I1006 15:18:53.777057  735145 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-626179/.minikube/cert.pem
	I1006 15:18:53.777083  735145 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21701-626179/.minikube/cert.pem (1123 bytes)
	I1006 15:18:53.777156  735145 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-626179/.minikube/key.pem, removing ...
	I1006 15:18:53.777159  735145 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-626179/.minikube/key.pem
	I1006 15:18:53.777184  735145 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21701-626179/.minikube/key.pem (1679 bytes)
	I1006 15:18:53.777291  735145 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21701-626179/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca-key.pem org=jenkins.first-651896 san=[127.0.0.1 192.168.58.2 first-651896 localhost minikube]
	I1006 15:18:54.238693  735145 provision.go:177] copyRemoteCerts
	I1006 15:18:54.238759  735145 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1006 15:18:54.238793  735145 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" first-651896
	I1006 15:18:54.256637  735145 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32928 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/first-651896/id_rsa Username:docker}
	I1006 15:18:54.357886  735145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1006 15:18:54.377001  735145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1006 15:18:54.393940  735145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1006 15:18:54.411373  735145 provision.go:87] duration metric: took 652.143209ms to configureAuth
	I1006 15:18:54.411388  735145 ubuntu.go:206] setting minikube options for container-runtime
	I1006 15:18:54.411543  735145 config.go:182] Loaded profile config "first-651896": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 15:18:54.411627  735145 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" first-651896
	I1006 15:18:54.429188  735145 main.go:141] libmachine: Using SSH client type: native
	I1006 15:18:54.429465  735145 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32928 <nil> <nil>}
	I1006 15:18:54.429480  735145 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1006 15:18:54.680554  735145 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1006 15:18:54.680571  735145 machine.go:96] duration metric: took 4.422792951s to provisionDockerMachine
	I1006 15:18:54.680580  735145 client.go:171] duration metric: took 9.876633313s to LocalClient.Create
	I1006 15:18:54.680597  735145 start.go:167] duration metric: took 9.876679253s to libmachine.API.Create "first-651896"
	I1006 15:18:54.680603  735145 start.go:293] postStartSetup for "first-651896" (driver="docker")
	I1006 15:18:54.680612  735145 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1006 15:18:54.680680  735145 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1006 15:18:54.680725  735145 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" first-651896
	I1006 15:18:54.701199  735145 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32928 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/first-651896/id_rsa Username:docker}
	I1006 15:18:54.803942  735145 ssh_runner.go:195] Run: cat /etc/os-release
	I1006 15:18:54.807523  735145 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1006 15:18:54.807542  735145 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1006 15:18:54.807550  735145 filesync.go:126] Scanning /home/jenkins/minikube-integration/21701-626179/.minikube/addons for local assets ...
	I1006 15:18:54.807598  735145 filesync.go:126] Scanning /home/jenkins/minikube-integration/21701-626179/.minikube/files for local assets ...
	I1006 15:18:54.807664  735145 filesync.go:149] local asset: /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem -> 6297192.pem in /etc/ssl/certs
	I1006 15:18:54.807740  735145 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1006 15:18:54.815625  735145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem --> /etc/ssl/certs/6297192.pem (1708 bytes)
	I1006 15:18:54.834691  735145 start.go:296] duration metric: took 154.076333ms for postStartSetup
	I1006 15:18:54.835032  735145 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" first-651896
	I1006 15:18:54.851860  735145 profile.go:143] Saving config to /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/first-651896/config.json ...
	I1006 15:18:54.852117  735145 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1006 15:18:54.852157  735145 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" first-651896
	I1006 15:18:54.868471  735145 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32928 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/first-651896/id_rsa Username:docker}
	I1006 15:18:54.966235  735145 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1006 15:18:54.970688  735145 start.go:128] duration metric: took 10.168935814s to createHost
	I1006 15:18:54.970703  735145 start.go:83] releasing machines lock for "first-651896", held for 10.169026072s
	I1006 15:18:54.970762  735145 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" first-651896
	I1006 15:18:54.987506  735145 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1006 15:18:54.987511  735145 ssh_runner.go:195] Run: cat /version.json
	I1006 15:18:54.987559  735145 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" first-651896
	I1006 15:18:54.987559  735145 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" first-651896
	I1006 15:18:55.004576  735145 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32928 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/first-651896/id_rsa Username:docker}
	I1006 15:18:55.005462  735145 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32928 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/first-651896/id_rsa Username:docker}
	I1006 15:18:55.162122  735145 ssh_runner.go:195] Run: systemctl --version
	I1006 15:18:55.168407  735145 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1006 15:18:55.203025  735145 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1006 15:18:55.207717  735145 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1006 15:18:55.207773  735145 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1006 15:18:55.232947  735145 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
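The find/mv pair above sidelines any preinstalled bridge or podman CNI configs by renaming them to *.mk_disabled, so the CNI minikube selects later (kindnet, per the cni.go lines further down) is the only one CRI-O loads. A hedged check of what remains on the node:

	sudo ls -la /etc/cni/net.d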
	I1006 15:18:55.232962  735145 start.go:495] detecting cgroup driver to use...
	I1006 15:18:55.232993  735145 detect.go:190] detected "systemd" cgroup driver on host os
	I1006 15:18:55.233039  735145 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1006 15:18:55.250126  735145 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1006 15:18:55.261391  735145 docker.go:218] disabling cri-docker service (if available) ...
	I1006 15:18:55.261425  735145 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1006 15:18:55.277143  735145 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1006 15:18:55.293306  735145 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1006 15:18:55.373680  735145 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1006 15:18:55.457012  735145 docker.go:234] disabling docker service ...
	I1006 15:18:55.457070  735145 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1006 15:18:55.476243  735145 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1006 15:18:55.488180  735145 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1006 15:18:55.570050  735145 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1006 15:18:55.653777  735145 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1006 15:18:55.666008  735145 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1006 15:18:55.679729  735145 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1006 15:18:55.679772  735145 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 15:18:55.689142  735145 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1006 15:18:55.689215  735145 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 15:18:55.697671  735145 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 15:18:55.705781  735145 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 15:18:55.714295  735145 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1006 15:18:55.722085  735145 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 15:18:55.730430  735145 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 15:18:55.743586  735145 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 15:18:55.752234  735145 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1006 15:18:55.759418  735145 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1006 15:18:55.766515  735145 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 15:18:55.844623  735145 ssh_runner.go:195] Run: sudo systemctl restart crio
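The sed edits above rewrite CRI-O's drop-in in place: pause image, systemd cgroup manager, conmon cgroup, and the unprivileged-port sysctl. A minimal way to verify the result on the node; expected values are inferred from the commands shown, not captured in this log:

	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf
	# pause_image = "registry.k8s.io/pause:3.10.1"
	# cgroup_manager = "systemd"
	# conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",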
	I1006 15:18:55.947707  735145 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1006 15:18:55.947771  735145 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1006 15:18:55.951756  735145 start.go:563] Will wait 60s for crictl version
	I1006 15:18:55.951810  735145 ssh_runner.go:195] Run: which crictl
	I1006 15:18:55.955250  735145 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1006 15:18:55.980135  735145 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1006 15:18:55.980190  735145 ssh_runner.go:195] Run: crio --version
	I1006 15:18:56.008904  735145 ssh_runner.go:195] Run: crio --version
	I1006 15:18:56.038077  735145 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1006 15:18:56.039083  735145 cli_runner.go:164] Run: docker network inspect first-651896 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1006 15:18:56.055649  735145 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I1006 15:18:56.059765  735145 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
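The bash one-liner above is minikube's idempotent /etc/hosts update: strip any stale host.minikube.internal line, append the current docker-network gateway, then copy the temp file back over /etc/hosts. The same pattern generalizes; a sketch with a hypothetical helper (update_hosts_entry is illustrative, not from minikube):

	update_hosts_entry() {  # usage: update_hosts_entry <ip> <name>
	  { grep -v $'\t'"$2"'$' /etc/hosts; printf '%s\t%s\n' "$1" "$2"; } > /tmp/h.$$
	  sudo cp /tmp/h.$$ /etc/hosts
	}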
	I1006 15:18:56.069748  735145 kubeadm.go:883] updating cluster {Name:first-651896 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:first-651896 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1006 15:18:56.069853  735145 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1006 15:18:56.069902  735145 ssh_runner.go:195] Run: sudo crictl images --output json
	I1006 15:18:56.100705  735145 crio.go:514] all images are preloaded for cri-o runtime.
	I1006 15:18:56.100715  735145 crio.go:433] Images already preloaded, skipping extraction
	I1006 15:18:56.100758  735145 ssh_runner.go:195] Run: sudo crictl images --output json
	I1006 15:18:56.125082  735145 crio.go:514] all images are preloaded for cri-o runtime.
	I1006 15:18:56.125099  735145 cache_images.go:85] Images are preloaded, skipping loading
	I1006 15:18:56.125106  735145 kubeadm.go:934] updating node { 192.168.58.2 8443 v1.34.1 crio true true} ...
	I1006 15:18:56.125192  735145 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=first-651896 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:first-651896 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
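The [Service] override above is written to the node as the 10-kubeadm.conf drop-in (the 362-byte scp a few lines below); the empty ExecStart= line clears any packaged command before substituting minikube's kubelet invocation, which is standard systemd override semantics. Once on the node, the effective unit can be reviewed with:

	systemctl cat kubelet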
	I1006 15:18:56.125286  735145 ssh_runner.go:195] Run: crio config
	I1006 15:18:56.169331  735145 cni.go:84] Creating CNI manager for ""
	I1006 15:18:56.169347  735145 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1006 15:18:56.169366  735145 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1006 15:18:56.169387  735145 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:first-651896 NodeName:first-651896 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1006 15:18:56.169507  735145 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "first-651896"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.58.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
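
This is the full kubeadm config that is scp'd to /var/tmp/minikube/kubeadm.yaml.new a few lines below. On kubeadm v1.26 and later the file can be sanity-checked before init; a hedged sketch using the binary path from this log:

	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml.new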
	
	I1006 15:18:56.169564  735145 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1006 15:18:56.177548  735145 binaries.go:44] Found k8s binaries, skipping transfer
	I1006 15:18:56.177614  735145 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1006 15:18:56.184942  735145 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1006 15:18:56.196925  735145 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1006 15:18:56.211032  735145 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2208 bytes)
	I1006 15:18:56.223397  735145 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I1006 15:18:56.226757  735145 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1006 15:18:56.236335  735145 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 15:18:56.311413  735145 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1006 15:18:56.335416  735145 certs.go:69] Setting up /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/first-651896 for IP: 192.168.58.2
	I1006 15:18:56.335431  735145 certs.go:195] generating shared ca certs ...
	I1006 15:18:56.335453  735145 certs.go:227] acquiring lock for ca certs: {Name:mka0cc25cb6a953e937aa825fc55167759271aaf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 15:18:56.335617  735145 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21701-626179/.minikube/ca.key
	I1006 15:18:56.335665  735145 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21701-626179/.minikube/proxy-client-ca.key
	I1006 15:18:56.335671  735145 certs.go:257] generating profile certs ...
	I1006 15:18:56.335721  735145 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/first-651896/client.key
	I1006 15:18:56.335741  735145 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/first-651896/client.crt with IP's: []
	I1006 15:18:56.763505  735145 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/first-651896/client.crt ...
	I1006 15:18:56.763526  735145 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/first-651896/client.crt: {Name:mkd0a375bac8d14137bcfc10f112fa0d378875fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 15:18:56.763740  735145 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/first-651896/client.key ...
	I1006 15:18:56.763747  735145 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/first-651896/client.key: {Name:mk644118547fee382c45d6b2e61c41a87a7f4a9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 15:18:56.763847  735145 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/first-651896/apiserver.key.78af6301
	I1006 15:18:56.763864  735145 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/first-651896/apiserver.crt.78af6301 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.58.2]
	I1006 15:18:56.839468  735145 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/first-651896/apiserver.crt.78af6301 ...
	I1006 15:18:56.839488  735145 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/first-651896/apiserver.crt.78af6301: {Name:mk9ff15975cda3005711498c1414bdffc32cbe1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 15:18:56.839663  735145 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/first-651896/apiserver.key.78af6301 ...
	I1006 15:18:56.839686  735145 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/first-651896/apiserver.key.78af6301: {Name:mk79397a0807aed767fbb5b20833415dcd97ef19 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 15:18:56.839763  735145 certs.go:382] copying /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/first-651896/apiserver.crt.78af6301 -> /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/first-651896/apiserver.crt
	I1006 15:18:56.839866  735145 certs.go:386] copying /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/first-651896/apiserver.key.78af6301 -> /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/first-651896/apiserver.key
	I1006 15:18:56.839923  735145 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/first-651896/proxy-client.key
	I1006 15:18:56.839935  735145 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/first-651896/proxy-client.crt with IP's: []
	I1006 15:18:57.103994  735145 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/first-651896/proxy-client.crt ...
	I1006 15:18:57.104011  735145 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/first-651896/proxy-client.crt: {Name:mk74609b12cb67c1aae8713c8de8e60d11ea7723 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 15:18:57.104179  735145 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/first-651896/proxy-client.key ...
	I1006 15:18:57.104185  735145 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/first-651896/proxy-client.key: {Name:mk6d15c77225b75f768ffee844595ee590644068 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 15:18:57.104372  735145 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/629719.pem (1338 bytes)
	W1006 15:18:57.104401  735145 certs.go:480] ignoring /home/jenkins/minikube-integration/21701-626179/.minikube/certs/629719_empty.pem, impossibly tiny 0 bytes
	I1006 15:18:57.104407  735145 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca-key.pem (1675 bytes)
	I1006 15:18:57.104426  735145 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/ca.pem (1082 bytes)
	I1006 15:18:57.104443  735145 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/cert.pem (1123 bytes)
	I1006 15:18:57.104460  735145 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/certs/key.pem (1679 bytes)
	I1006 15:18:57.104493  735145 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem (1708 bytes)
	I1006 15:18:57.105013  735145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1006 15:18:57.123933  735145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1006 15:18:57.140620  735145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1006 15:18:57.156860  735145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1006 15:18:57.173428  735145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/first-651896/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1006 15:18:57.189470  735145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/first-651896/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1006 15:18:57.206113  735145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/first-651896/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1006 15:18:57.222348  735145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/first-651896/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1006 15:18:57.239063  735145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/ssl/certs/6297192.pem --> /usr/share/ca-certificates/6297192.pem (1708 bytes)
	I1006 15:18:57.257730  735145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1006 15:18:57.275649  735145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-626179/.minikube/certs/629719.pem --> /usr/share/ca-certificates/629719.pem (1338 bytes)
	I1006 15:18:57.293272  735145 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1006 15:18:57.304965  735145 ssh_runner.go:195] Run: openssl version
	I1006 15:18:57.310758  735145 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1006 15:18:57.318531  735145 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1006 15:18:57.321830  735145 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  6 13:56 /usr/share/ca-certificates/minikubeCA.pem
	I1006 15:18:57.321873  735145 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1006 15:18:57.355153  735145 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1006 15:18:57.363149  735145 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/629719.pem && ln -fs /usr/share/ca-certificates/629719.pem /etc/ssl/certs/629719.pem"
	I1006 15:18:57.370949  735145 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/629719.pem
	I1006 15:18:57.374321  735145 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  6 14:13 /usr/share/ca-certificates/629719.pem
	I1006 15:18:57.374359  735145 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/629719.pem
	I1006 15:18:57.408181  735145 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/629719.pem /etc/ssl/certs/51391683.0"
	I1006 15:18:57.417096  735145 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6297192.pem && ln -fs /usr/share/ca-certificates/6297192.pem /etc/ssl/certs/6297192.pem"
	I1006 15:18:57.425464  735145 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6297192.pem
	I1006 15:18:57.429072  735145 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  6 14:13 /usr/share/ca-certificates/6297192.pem
	I1006 15:18:57.429117  735145 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6297192.pem
	I1006 15:18:57.463455  735145 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6297192.pem /etc/ssl/certs/3ec20f2e.0"
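Each test -L / ln -fs pair above follows OpenSSL's hashed-directory convention: a CA is discoverable at <subject-hash>.0 under /etc/ssl/certs, where the hash is exactly what the openssl x509 -hash calls compute. For example, for the minikube CA installed above:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	# b5213941  -> matches the /etc/ssl/certs/b5213941.0 symlink created above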
	I1006 15:18:57.472421  735145 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1006 15:18:57.476116  735145 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1006 15:18:57.476162  735145 kubeadm.go:400] StartCluster: {Name:first-651896 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:first-651896 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 15:18:57.476253  735145 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1006 15:18:57.476294  735145 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1006 15:18:57.504303  735145 cri.go:89] found id: ""
	I1006 15:18:57.504363  735145 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1006 15:18:57.512555  735145 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1006 15:18:57.520454  735145 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1006 15:18:57.520493  735145 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1006 15:18:57.527884  735145 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1006 15:18:57.527891  735145 kubeadm.go:157] found existing configuration files:
	
	I1006 15:18:57.527925  735145 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1006 15:18:57.535233  735145 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1006 15:18:57.535322  735145 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1006 15:18:57.542164  735145 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1006 15:18:57.549362  735145 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1006 15:18:57.549398  735145 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1006 15:18:57.556196  735145 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1006 15:18:57.563192  735145 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1006 15:18:57.563247  735145 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1006 15:18:57.569894  735145 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1006 15:18:57.577097  735145 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1006 15:18:57.577127  735145 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1006 15:18:57.584178  735145 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1006 15:18:57.621935  735145 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1006 15:18:57.621988  735145 kubeadm.go:318] [preflight] Running pre-flight checks
	I1006 15:18:57.641720  735145 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1006 15:18:57.641797  735145 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1006 15:18:57.641837  735145 kubeadm.go:318] OS: Linux
	I1006 15:18:57.641898  735145 kubeadm.go:318] CGROUPS_CPU: enabled
	I1006 15:18:57.641952  735145 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1006 15:18:57.642009  735145 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1006 15:18:57.642067  735145 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1006 15:18:57.642113  735145 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1006 15:18:57.642149  735145 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1006 15:18:57.642193  735145 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1006 15:18:57.642253  735145 kubeadm.go:318] CGROUPS_IO: enabled
	I1006 15:18:57.698804  735145 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1006 15:18:57.698920  735145 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1006 15:18:57.699032  735145 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1006 15:18:57.707302  735145 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1006 15:18:57.709275  735145 out.go:252]   - Generating certificates and keys ...
	I1006 15:18:57.709363  735145 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1006 15:18:57.709453  735145 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1006 15:18:57.955985  735145 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1006 15:18:58.244943  735145 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1006 15:18:58.771354  735145 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1006 15:19:00.457382  735145 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1006 15:19:00.965972  735145 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1006 15:19:00.966105  735145 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [first-651896 localhost] and IPs [192.168.58.2 127.0.0.1 ::1]
	I1006 15:19:01.275111  735145 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1006 15:19:01.275291  735145 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [first-651896 localhost] and IPs [192.168.58.2 127.0.0.1 ::1]
	I1006 15:19:01.766728  735145 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1006 15:19:01.877400  735145 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1006 15:19:01.930496  735145 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1006 15:19:01.930566  735145 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1006 15:19:02.272547  735145 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1006 15:19:02.593067  735145 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1006 15:19:02.835078  735145 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1006 15:19:03.030871  735145 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1006 15:19:03.169605  735145 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1006 15:19:03.170243  735145 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1006 15:19:03.175280  735145 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1006 15:19:03.176716  735145 out.go:252]   - Booting up control plane ...
	I1006 15:19:03.176800  735145 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1006 15:19:03.176857  735145 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1006 15:19:03.177495  735145 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1006 15:19:03.204941  735145 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1006 15:19:03.205065  735145 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1006 15:19:03.211706  735145 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1006 15:19:03.211965  735145 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1006 15:19:03.212001  735145 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1006 15:19:03.304712  735145 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1006 15:19:03.304888  735145 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1006 15:19:04.305389  735145 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.000852414s
	I1006 15:19:04.308018  735145 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1006 15:19:04.308109  735145 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.58.2:8443/livez
	I1006 15:19:04.308233  735145 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1006 15:19:04.308338  735145 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1006 15:23:04.309654  735145 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.00117777s
	I1006 15:23:04.309817  735145 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.001354104s
	I1006 15:23:04.309963  735145 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001414821s
	I1006 15:23:04.310022  735145 kubeadm.go:318] 
	I1006 15:23:04.310225  735145 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1006 15:23:04.310339  735145 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1006 15:23:04.310446  735145 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1006 15:23:04.310628  735145 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1006 15:23:04.310751  735145 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1006 15:23:04.310840  735145 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1006 15:23:04.310848  735145 kubeadm.go:318] 
	I1006 15:23:04.314276  735145 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1006 15:23:04.314375  735145 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1006 15:23:04.314862  735145 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.58.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1006 15:23:04.314926  735145 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	W1006 15:23:04.315095  735145 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [first-651896 localhost] and IPs [192.168.58.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [first-651896 localhost] and IPs [192.168.58.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.000852414s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.58.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.00117777s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001354104s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001414821s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.58.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
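
When kubeadm gives up like this, the next diagnostic step is the one it suggests plus the kubelet journal; a minimal sequence on the node (CONTAINERID is a placeholder, as in kubeadm's own hint):

	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID
	sudo journalctl -u kubelet --no-pager -n 50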
	
	I1006 15:23:04.315176  735145 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1006 15:23:04.765170  735145 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1006 15:23:04.777962  735145 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1006 15:23:04.778019  735145 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1006 15:23:04.785910  735145 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1006 15:23:04.785923  735145 kubeadm.go:157] found existing configuration files:
	
	I1006 15:23:04.785967  735145 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1006 15:23:04.793471  735145 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1006 15:23:04.793513  735145 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1006 15:23:04.800636  735145 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1006 15:23:04.807774  735145 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1006 15:23:04.807823  735145 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1006 15:23:04.814764  735145 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1006 15:23:04.822356  735145 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1006 15:23:04.822411  735145 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1006 15:23:04.829485  735145 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1006 15:23:04.836811  735145 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1006 15:23:04.836855  735145 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1006 15:23:04.843765  735145 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1006 15:23:04.898070  735145 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1006 15:23:04.957030  735145 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1006 15:27:07.934660  735145 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.58.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.58.2:8443: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1006 15:27:07.934876  735145 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1006 15:27:07.937343  735145 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1006 15:27:07.937392  735145 kubeadm.go:318] [preflight] Running pre-flight checks
	I1006 15:27:07.937481  735145 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1006 15:27:07.937538  735145 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1006 15:27:07.937577  735145 kubeadm.go:318] OS: Linux
	I1006 15:27:07.937615  735145 kubeadm.go:318] CGROUPS_CPU: enabled
	I1006 15:27:07.937650  735145 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1006 15:27:07.937725  735145 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1006 15:27:07.937797  735145 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1006 15:27:07.937869  735145 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1006 15:27:07.937926  735145 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1006 15:27:07.937966  735145 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1006 15:27:07.937999  735145 kubeadm.go:318] CGROUPS_IO: enabled
	I1006 15:27:07.938059  735145 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1006 15:27:07.938150  735145 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1006 15:27:07.938252  735145 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1006 15:27:07.938304  735145 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1006 15:27:07.940392  735145 out.go:252]   - Generating certificates and keys ...
	I1006 15:27:07.940454  735145 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1006 15:27:07.940511  735145 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1006 15:27:07.940581  735145 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1006 15:27:07.940650  735145 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1006 15:27:07.940711  735145 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1006 15:27:07.940759  735145 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1006 15:27:07.940806  735145 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1006 15:27:07.940859  735145 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1006 15:27:07.940915  735145 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1006 15:27:07.940977  735145 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1006 15:27:07.941026  735145 kubeadm.go:318] [certs] Using the existing "sa" key
	I1006 15:27:07.941110  735145 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1006 15:27:07.941168  735145 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1006 15:27:07.941237  735145 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1006 15:27:07.941299  735145 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1006 15:27:07.941350  735145 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1006 15:27:07.941396  735145 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1006 15:27:07.941462  735145 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1006 15:27:07.941518  735145 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1006 15:27:07.942656  735145 out.go:252]   - Booting up control plane ...
	I1006 15:27:07.942724  735145 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1006 15:27:07.942793  735145 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1006 15:27:07.942849  735145 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1006 15:27:07.942936  735145 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1006 15:27:07.943022  735145 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1006 15:27:07.943107  735145 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1006 15:27:07.943195  735145 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1006 15:27:07.943244  735145 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1006 15:27:07.943349  735145 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1006 15:27:07.943442  735145 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1006 15:27:07.943489  735145 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001793335s
	I1006 15:27:07.943561  735145 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1006 15:27:07.943629  735145 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.58.2:8443/livez
	I1006 15:27:07.943737  735145 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1006 15:27:07.943853  735145 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1006 15:27:07.943938  735145 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001036508s
	I1006 15:27:07.944026  735145 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.001102177s
	I1006 15:27:07.944104  735145 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.001335822s
	I1006 15:27:07.944108  735145 kubeadm.go:318] 
	I1006 15:27:07.944183  735145 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1006 15:27:07.944264  735145 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1006 15:27:07.944334  735145 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1006 15:27:07.944409  735145 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1006 15:27:07.944465  735145 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1006 15:27:07.944532  735145 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1006 15:27:07.944556  735145 kubeadm.go:318] 
	I1006 15:27:07.944599  735145 kubeadm.go:402] duration metric: took 8m10.468440737s to StartCluster
	I1006 15:27:07.944653  735145 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 15:27:07.944702  735145 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 15:27:07.972410  735145 cri.go:89] found id: ""
	I1006 15:27:07.972443  735145 logs.go:282] 0 containers: []
	W1006 15:27:07.972451  735145 logs.go:284] No container was found matching "kube-apiserver"
	I1006 15:27:07.972457  735145 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 15:27:07.972521  735145 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 15:27:07.999002  735145 cri.go:89] found id: ""
	I1006 15:27:07.999018  735145 logs.go:282] 0 containers: []
	W1006 15:27:07.999025  735145 logs.go:284] No container was found matching "etcd"
	I1006 15:27:07.999030  735145 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 15:27:07.999081  735145 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 15:27:08.024908  735145 cri.go:89] found id: ""
	I1006 15:27:08.024933  735145 logs.go:282] 0 containers: []
	W1006 15:27:08.024940  735145 logs.go:284] No container was found matching "coredns"
	I1006 15:27:08.024945  735145 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 15:27:08.024994  735145 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 15:27:08.049904  735145 cri.go:89] found id: ""
	I1006 15:27:08.049921  735145 logs.go:282] 0 containers: []
	W1006 15:27:08.049928  735145 logs.go:284] No container was found matching "kube-scheduler"
	I1006 15:27:08.049933  735145 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 15:27:08.049980  735145 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 15:27:08.075859  735145 cri.go:89] found id: ""
	I1006 15:27:08.075874  735145 logs.go:282] 0 containers: []
	W1006 15:27:08.075882  735145 logs.go:284] No container was found matching "kube-proxy"
	I1006 15:27:08.075888  735145 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 15:27:08.075936  735145 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 15:27:08.101930  735145 cri.go:89] found id: ""
	I1006 15:27:08.101949  735145 logs.go:282] 0 containers: []
	W1006 15:27:08.101956  735145 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 15:27:08.101964  735145 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 15:27:08.102028  735145 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 15:27:08.128105  735145 cri.go:89] found id: ""
	I1006 15:27:08.128121  735145 logs.go:282] 0 containers: []
	W1006 15:27:08.128128  735145 logs.go:284] No container was found matching "kindnet"
	I1006 15:27:08.128143  735145 logs.go:123] Gathering logs for describe nodes ...
	I1006 15:27:08.128155  735145 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 15:27:08.188728  735145 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 15:27:08.181281    2427 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 15:27:08.181848    2427 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 15:27:08.183445    2427 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 15:27:08.183910    2427 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 15:27:08.185509    2427 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 15:27:08.181281    2427 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 15:27:08.181848    2427 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 15:27:08.183445    2427 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 15:27:08.183910    2427 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 15:27:08.185509    2427 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 15:27:08.188750  735145 logs.go:123] Gathering logs for CRI-O ...
	I1006 15:27:08.188760  735145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 15:27:08.250523  735145 logs.go:123] Gathering logs for container status ...
	I1006 15:27:08.250548  735145 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 15:27:08.280458  735145 logs.go:123] Gathering logs for kubelet ...
	I1006 15:27:08.280476  735145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 15:27:08.345984  735145 logs.go:123] Gathering logs for dmesg ...
	I1006 15:27:08.346006  735145 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1006 15:27:08.360134  735145 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001793335s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.58.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.001036508s
	[control-plane-check] kube-apiserver is not healthy after 4m0.001102177s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001335822s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.58.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.58.2:8443: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1006 15:27:08.360179  735145 out.go:285] * 
	W1006 15:27:08.360279  735145 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001793335s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.58.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.001036508s
	[control-plane-check] kube-apiserver is not healthy after 4m0.001102177s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001335822s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.58.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.58.2:8443: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1006 15:27:08.360294  735145 out.go:285] * 
	W1006 15:27:08.362133  735145 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1006 15:27:08.365573  735145 out.go:203] 
	W1006 15:27:08.366559  735145 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001793335s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.58.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.001036508s
	[control-plane-check] kube-apiserver is not healthy after 4m0.001102177s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001335822s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.58.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.58.2:8443: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1006 15:27:08.366583  735145 out.go:285] * 
	I1006 15:27:08.367949  735145 out.go:203] 
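	The kubeadm advice embedded in the error above boils down to a two-step crictl triage. A minimal sketch, assuming the CRI-O socket path printed in the log and a crictl binary on the node's PATH:

	    # List all Kubernetes containers, exited ones included, on the CRI-O socket.
	    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause

	    # Dump the logs of whichever container shows a non-Running state.
	    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs <CONTAINERID>

	    # The preflight warning above also asks for the kubelet unit to be enabled.
	    sudo systemctl enable kubelet.service

	In this run the triage dead-ends immediately: the "container status" section below is empty, because CRI-O rejects every create with "cannot open sd-bus", so there are no container logs to read.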
	
	
	==> CRI-O <==
	Oct 06 15:27:02 first-651896 crio[780]: time="2025-10-06T15:27:02.519735788Z" level=info msg="createCtr: removing container 0dbeeed99f3639767e5bf65c294ef13de1ee56388296de544543e4a33b2b2008" id=81c5e02d-f2d1-462c-bad1-71c71311a01a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 15:27:02 first-651896 crio[780]: time="2025-10-06T15:27:02.519767503Z" level=info msg="createCtr: deleting container 0dbeeed99f3639767e5bf65c294ef13de1ee56388296de544543e4a33b2b2008 from storage" id=81c5e02d-f2d1-462c-bad1-71c71311a01a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 15:27:02 first-651896 crio[780]: time="2025-10-06T15:27:02.5219415Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-first-651896_kube-system_228ecfa79e71ce8bddc2be722bccb3a1_0" id=81c5e02d-f2d1-462c-bad1-71c71311a01a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 15:27:04 first-651896 crio[780]: time="2025-10-06T15:27:04.492810841Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=cac9459e-4fbe-4454-b084-a944a94b3af2 name=/runtime.v1.ImageService/ImageStatus
	Oct 06 15:27:04 first-651896 crio[780]: time="2025-10-06T15:27:04.493725588Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=6be6fd28-5cfc-47c8-95fd-cd6a12478613 name=/runtime.v1.ImageService/ImageStatus
	Oct 06 15:27:04 first-651896 crio[780]: time="2025-10-06T15:27:04.494584642Z" level=info msg="Creating container: kube-system/etcd-first-651896/etcd" id=51e32bd2-81a2-4d8f-94ce-0b5bc68bafb0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 15:27:04 first-651896 crio[780]: time="2025-10-06T15:27:04.49488676Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 15:27:04 first-651896 crio[780]: time="2025-10-06T15:27:04.49861393Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 15:27:04 first-651896 crio[780]: time="2025-10-06T15:27:04.49909431Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 15:27:04 first-651896 crio[780]: time="2025-10-06T15:27:04.518170881Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=51e32bd2-81a2-4d8f-94ce-0b5bc68bafb0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 15:27:04 first-651896 crio[780]: time="2025-10-06T15:27:04.519689914Z" level=info msg="createCtr: deleting container ID 570ccce0c273575f2b3faa447e9ff678606789cb1a5f26c8262953b10ebf7abc from idIndex" id=51e32bd2-81a2-4d8f-94ce-0b5bc68bafb0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 15:27:04 first-651896 crio[780]: time="2025-10-06T15:27:04.519722939Z" level=info msg="createCtr: removing container 570ccce0c273575f2b3faa447e9ff678606789cb1a5f26c8262953b10ebf7abc" id=51e32bd2-81a2-4d8f-94ce-0b5bc68bafb0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 15:27:04 first-651896 crio[780]: time="2025-10-06T15:27:04.519750339Z" level=info msg="createCtr: deleting container 570ccce0c273575f2b3faa447e9ff678606789cb1a5f26c8262953b10ebf7abc from storage" id=51e32bd2-81a2-4d8f-94ce-0b5bc68bafb0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 15:27:04 first-651896 crio[780]: time="2025-10-06T15:27:04.521554575Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-first-651896_kube-system_dc27fd149a977857aeeafa82c10c08b3_0" id=51e32bd2-81a2-4d8f-94ce-0b5bc68bafb0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 15:27:07 first-651896 crio[780]: time="2025-10-06T15:27:07.492348703Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=cc3af9b5-36ec-4e4e-ac7b-27468cfb240b name=/runtime.v1.ImageService/ImageStatus
	Oct 06 15:27:07 first-651896 crio[780]: time="2025-10-06T15:27:07.49314224Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=6a61ff1c-a18e-495a-a40c-f6b7ffa038bb name=/runtime.v1.ImageService/ImageStatus
	Oct 06 15:27:07 first-651896 crio[780]: time="2025-10-06T15:27:07.495199543Z" level=info msg="Creating container: kube-system/kube-apiserver-first-651896/kube-apiserver" id=2e5c5e62-b984-4d7b-989a-598e21379970 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 15:27:07 first-651896 crio[780]: time="2025-10-06T15:27:07.495645186Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 15:27:07 first-651896 crio[780]: time="2025-10-06T15:27:07.499435887Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 15:27:07 first-651896 crio[780]: time="2025-10-06T15:27:07.499825441Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 15:27:07 first-651896 crio[780]: time="2025-10-06T15:27:07.514474494Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=2e5c5e62-b984-4d7b-989a-598e21379970 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 15:27:07 first-651896 crio[780]: time="2025-10-06T15:27:07.515742553Z" level=info msg="createCtr: deleting container ID 7f29a9285553ad0ec251cb1241291b8484b5f81e21016f6c49562030d1bb4501 from idIndex" id=2e5c5e62-b984-4d7b-989a-598e21379970 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 15:27:07 first-651896 crio[780]: time="2025-10-06T15:27:07.5157734Z" level=info msg="createCtr: removing container 7f29a9285553ad0ec251cb1241291b8484b5f81e21016f6c49562030d1bb4501" id=2e5c5e62-b984-4d7b-989a-598e21379970 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 15:27:07 first-651896 crio[780]: time="2025-10-06T15:27:07.515799985Z" level=info msg="createCtr: deleting container 7f29a9285553ad0ec251cb1241291b8484b5f81e21016f6c49562030d1bb4501 from storage" id=2e5c5e62-b984-4d7b-989a-598e21379970 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 15:27:07 first-651896 crio[780]: time="2025-10-06T15:27:07.517866643Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-first-651896_kube-system_e65d0e2d3d3f2fd9081f97de9f5b3864_0" id=2e5c5e62-b984-4d7b-989a-598e21379970 name=/runtime.v1.RuntimeService/CreateContainer
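	Every CreateContainer in the CRI-O log above dies the same way: "Container creation error: cannot open sd-bus: No such file or directory". That message comes from the OCI runtime trying to reach systemd's bus, which typically means the runtime is configured for the systemd cgroup manager on a node where no systemd instance is reachable. A rough check to run inside the node (the config path and socket locations below are the usual defaults, not taken from this report):

	    # Which cgroup manager is CRI-O configured with?
	    grep -r cgroup_manager /etc/crio/ 2>/dev/null

	    # sd_booted()-style test: this directory exists only under a running systemd.
	    ls -ld /run/systemd/system

	    # The bus socket the runtime would typically be opening.
	    ls -l /run/dbus/system_bus_socket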
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 15:27:09.498041    2585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 15:27:09.498591    2585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 15:27:09.500199    2585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 15:27:09.500665    2585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 15:27:09.502118    2585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
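	kubeadm's control-plane-check URLs are all listed verbatim in the error above, so the probes are easy to replay by hand. A quick sketch (addresses copied from this log; -k because the TLS components serve self-signed certificates; in this run each probe would be refused, matching the check failures):

	    curl -sf  http://127.0.0.1:10248/healthz  && echo kubelet ok
	    curl -skf https://192.168.58.2:8443/livez  && echo kube-apiserver ok
	    curl -skf https://127.0.0.1:10257/healthz && echo kube-controller-manager ok
	    curl -skf https://127.0.0.1:10259/livez   && echo kube-scheduler ok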
	
	
	==> dmesg <==
	
	
	==> kernel <==
	 15:27:09 up  6:09,  0 user,  load average: 0.08, 0.22, 0.25
	Linux first-651896 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 06 15:27:02 first-651896 kubelet[1825]: E1006 15:27:02.522369    1825 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 06 15:27:02 first-651896 kubelet[1825]:         container kube-controller-manager start failed in pod kube-controller-manager-first-651896_kube-system(228ecfa79e71ce8bddc2be722bccb3a1): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 06 15:27:02 first-651896 kubelet[1825]:  > logger="UnhandledError"
	Oct 06 15:27:02 first-651896 kubelet[1825]: E1006 15:27:02.522411    1825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-first-651896" podUID="228ecfa79e71ce8bddc2be722bccb3a1"
	Oct 06 15:27:04 first-651896 kubelet[1825]: E1006 15:27:04.118708    1825 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.58.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/first-651896?timeout=10s\": dial tcp 192.168.58.2:8443: connect: connection refused" interval="7s"
	Oct 06 15:27:04 first-651896 kubelet[1825]: I1006 15:27:04.270506    1825 kubelet_node_status.go:75] "Attempting to register node" node="first-651896"
	Oct 06 15:27:04 first-651896 kubelet[1825]: E1006 15:27:04.270889    1825 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.58.2:8443/api/v1/nodes\": dial tcp 192.168.58.2:8443: connect: connection refused" node="first-651896"
	Oct 06 15:27:04 first-651896 kubelet[1825]: E1006 15:27:04.492412    1825 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"first-651896\" not found" node="first-651896"
	Oct 06 15:27:04 first-651896 kubelet[1825]: E1006 15:27:04.521806    1825 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 06 15:27:04 first-651896 kubelet[1825]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 06 15:27:04 first-651896 kubelet[1825]:  > podSandboxID="2f75ad995876054b901411a6bdf14770746d2c91e371c4444a3143eb5d08a07c"
	Oct 06 15:27:04 first-651896 kubelet[1825]: E1006 15:27:04.521904    1825 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 06 15:27:04 first-651896 kubelet[1825]:         container etcd start failed in pod etcd-first-651896_kube-system(dc27fd149a977857aeeafa82c10c08b3): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 06 15:27:04 first-651896 kubelet[1825]:  > logger="UnhandledError"
	Oct 06 15:27:04 first-651896 kubelet[1825]: E1006 15:27:04.521941    1825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-first-651896" podUID="dc27fd149a977857aeeafa82c10c08b3"
	Oct 06 15:27:07 first-651896 kubelet[1825]: E1006 15:27:07.491988    1825 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"first-651896\" not found" node="first-651896"
	Oct 06 15:27:07 first-651896 kubelet[1825]: E1006 15:27:07.506596    1825 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"first-651896\" not found"
	Oct 06 15:27:07 first-651896 kubelet[1825]: E1006 15:27:07.518107    1825 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 06 15:27:07 first-651896 kubelet[1825]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 06 15:27:07 first-651896 kubelet[1825]:  > podSandboxID="34f8c968f3745307c8f5de325fe2ab5d32a38e916e5cc78588a408fd15849bf0"
	Oct 06 15:27:07 first-651896 kubelet[1825]: E1006 15:27:07.518193    1825 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 06 15:27:07 first-651896 kubelet[1825]:         container kube-apiserver start failed in pod kube-apiserver-first-651896_kube-system(e65d0e2d3d3f2fd9081f97de9f5b3864): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 06 15:27:07 first-651896 kubelet[1825]:  > logger="UnhandledError"
	Oct 06 15:27:07 first-651896 kubelet[1825]: E1006 15:27:07.518240    1825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-first-651896" podUID="e65d0e2d3d3f2fd9081f97de9f5b3864"
	Oct 06 15:27:09 first-651896 kubelet[1825]: E1006 15:27:09.247406    1825 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.58.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.58.2:8443: connect: connection refused" event="&Event{ObjectMeta:{first-651896.186bf03473771f5a  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:first-651896,UID:first-651896,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node first-651896 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:first-651896,},FirstTimestamp:2025-10-06 15:23:07.484462938 +0000 UTC m=+0.558007127,LastTimestamp:2025-10-06 15:23:07.484462938 +0000 UTC m=+0.558007127,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:first-651896,}"
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p first-651896 -n first-651896
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p first-651896 -n first-651896: exit status 6 (286.334544ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1006 15:27:09.864913  740603 status.go:458] kubeconfig endpoint: get endpoint: "first-651896" does not appear in /home/jenkins/minikube-integration/21701-626179/kubeconfig

** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "first-651896" apiserver is not running, skipping kubectl commands (state="Stopped")
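The stale-context warning in the status output above has a direct fix, as the message itself says. A sketch using this report's binary and profile name (moot here, since the profile is deleted two lines below, but applicable to a live cluster):

    # Rewrite the kubeconfig entry for this profile, then confirm the context.
    out/minikube-linux-amd64 update-context -p first-651896
    kubectl config get-contexts first-651896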
helpers_test.go:175: Cleaning up "first-651896" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-651896
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-651896: (1.90687548s)
--- FAIL: TestMinikubeProfile (507.21s)

TestMultiNode/serial/ValidateNameConflict (7200.059s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-833839
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-833839-m01 --driver=docker  --container-runtime=crio
E1006 15:51:53.603028  629719 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 15:55:30.521623  629719 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
panic: test timed out after 2h0m0s
	running tests:
		TestMultiNode (28m17s)
		TestMultiNode/serial (28m17s)
		TestMultiNode/serial/ValidateNameConflict (4m23s)
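The 7200.059s duration is not the subtest's own runtime: the Go test binary's two-hour -timeout fired while ValidateNameConflict had been running for 4m23s, and the alarm goroutine below panicked the whole process. When reproducing locally, the deadline is just a go test flag; a sketch (package path inferred from the stack traces below; the 3h value is only an example):

    # Re-run the stuck subtest with a longer deadline and verbose output.
    go test ./test/integration -run 'TestMultiNode/serial/ValidateNameConflict' -timeout 3h -v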

goroutine 2079 [running]:
testing.(*M).startAlarm.func1()
	/usr/local/go/src/testing/testing.go:2484 +0x394
created by time.goFunc
	/usr/local/go/src/time/sleep.go:215 +0x2d

goroutine 1 [chan receive, 28 minutes]:
testing.(*T).Run(0xc000505500, {0x32034db?, 0xc000b95a88?}, 0x3c51e10)
	/usr/local/go/src/testing/testing.go:1859 +0x431
testing.runTests.func1(0xc000505500)
	/usr/local/go/src/testing/testing.go:2279 +0x37
testing.tRunner(0xc000505500, 0xc000b95bc8)
	/usr/local/go/src/testing/testing.go:1792 +0xf4
testing.runTests(0xc0005a0030, {0x5c616c0, 0x2c, 0x2c}, {0xffffffffffffffff?, 0xc0008aa0d0?, 0x5c89dc0?})
	/usr/local/go/src/testing/testing.go:2277 +0x4b4
testing.(*M).Run(0xc000c0e1e0)
	/usr/local/go/src/testing/testing.go:2142 +0x64a
k8s.io/minikube/test/integration.TestMain(0xc000c0e1e0)
	/home/jenkins/workspace/Build_Cross/test/integration/main_test.go:64 +0xdb
main.main()
	_testmain.go:133 +0xa8

goroutine 114 [chan receive, 119 minutes]:
testing.(*T).Parallel(0xc000682540)
	/usr/local/go/src/testing/testing.go:1577 +0x225
k8s.io/minikube/test/integration.MaybeParallel(0xc000682540)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:500 +0x34
k8s.io/minikube/test/integration.TestOffline(0xc000682540)
	/home/jenkins/workspace/Build_Cross/test/integration/aab_offline_test.go:32 +0x39
testing.tRunner(0xc000682540, 0x3c51e28)
	/usr/local/go/src/testing/testing.go:1792 +0xf4
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1851 +0x413

goroutine 65 [chan receive, 110 minutes]:
testing.(*T).Parallel(0xc000583500)
	/usr/local/go/src/testing/testing.go:1577 +0x225
k8s.io/minikube/test/integration.MaybeParallel(0xc000583500)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:500 +0x34
k8s.io/minikube/test/integration.TestCertExpiration(0xc000583500)
	/home/jenkins/workspace/Build_Cross/test/integration/cert_options_test.go:115 +0x39
testing.tRunner(0xc000583500, 0x3c51d20)
	/usr/local/go/src/testing/testing.go:1792 +0xf4
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1851 +0x413

goroutine 132 [chan receive, 110 minutes]:
testing.(*T).Parallel(0xc000682a80)
	/usr/local/go/src/testing/testing.go:1577 +0x225
k8s.io/minikube/test/integration.MaybeParallel(0xc000682a80)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:500 +0x34
k8s.io/minikube/test/integration.TestForceSystemdEnv(0xc000682a80)
	/home/jenkins/workspace/Build_Cross/test/integration/docker_test.go:146 +0x87
testing.tRunner(0xc000682a80, 0x3c51d68)
	/usr/local/go/src/testing/testing.go:1792 +0xf4
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1851 +0x413

goroutine 131 [chan receive, 110 minutes]:
testing.(*T).Parallel(0xc000583dc0)
	/usr/local/go/src/testing/testing.go:1577 +0x225
k8s.io/minikube/test/integration.MaybeParallel(0xc000583dc0)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:500 +0x34
k8s.io/minikube/test/integration.TestForceSystemdFlag(0xc000583dc0)
	/home/jenkins/workspace/Build_Cross/test/integration/docker_test.go:83 +0x87
testing.tRunner(0xc000583dc0, 0x3c51d70)
	/usr/local/go/src/testing/testing.go:1792 +0xf4
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1851 +0x413

goroutine 64 [chan receive, 110 minutes]:
testing.(*T).Parallel(0xc000583180)
	/usr/local/go/src/testing/testing.go:1577 +0x225
k8s.io/minikube/test/integration.MaybeParallel(0xc000583180)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:500 +0x34
k8s.io/minikube/test/integration.TestCertOptions(0xc000583180)
	/home/jenkins/workspace/Build_Cross/test/integration/cert_options_test.go:36 +0x87
testing.tRunner(0xc000583180, 0x3c51d28)
	/usr/local/go/src/testing/testing.go:1792 +0xf4
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1851 +0x413

goroutine 216 [IO wait, 102 minutes]:
internal/poll.runtime_pollWait(0x7a418a966dc8, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0x85
internal/poll.(*pollDesc).wait(0xc000c20100?, 0x900000036?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0xc000c20100)
	/usr/local/go/src/internal/poll/fd_unix.go:620 +0x295
net.(*netFD).accept(0xc000c20100)
	/usr/local/go/src/net/fd_unix.go:172 +0x29
net.(*TCPListener).accept(0xc0008dc740)
	/usr/local/go/src/net/tcpsock_posix.go:159 +0x1b
net.(*TCPListener).Accept(0xc0008dc740)
	/usr/local/go/src/net/tcpsock.go:380 +0x30
net/http.(*Server).Serve(0xc0017e4000, {0x3f9b790, 0xc0008dc740})
	/usr/local/go/src/net/http/server.go:3424 +0x30c
net/http.(*Server).ListenAndServe(0xc0017e4000)
	/usr/local/go/src/net/http/server.go:3350 +0x71
k8s.io/minikube/test/integration.startHTTPProxy.func1(...)
	/home/jenkins/workspace/Build_Cross/test/integration/functional_test.go:2218
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 213
	/home/jenkins/workspace/Build_Cross/test/integration/functional_test.go:2217 +0x129

goroutine 134 [chan receive, 110 minutes]:
testing.(*T).Parallel(0xc000682e00)
	/usr/local/go/src/testing/testing.go:1577 +0x225
k8s.io/minikube/test/integration.MaybeParallel(0xc000682e00)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:500 +0x34
k8s.io/minikube/test/integration.TestKVMDriverInstallOrUpdate(0xc000682e00)
	/home/jenkins/workspace/Build_Cross/test/integration/driver_install_or_update_test.go:48 +0x87
testing.tRunner(0xc000682e00, 0x3c51db8)
	/usr/local/go/src/testing/testing.go:1792 +0xf4
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1851 +0x413

goroutine 548 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 547
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xbb

goroutine 513 [chan send, 75 minutes]:
os/exec.(*Cmd).watchCtx(0xc0002a8900, 0xc001782310)
	/usr/local/go/src/os/exec/exec.go:814 +0x3e5
created by os/exec.(*Cmd).Start in goroutine 512
	/usr/local/go/src/os/exec/exec.go:775 +0x8f3

goroutine 530 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x3fc0920, {{0x3fb5948, 0xc0002483c0?}, 0xc0003b5700?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x378
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 529
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x272

goroutine 547 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3fae230, 0xc0000844d0}, 0xc0016e3f50, 0xc0016e3f98)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3fae230, 0xc0000844d0}, 0x40?, 0xc0016e3f50, 0xc0016e3f98)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3fae230?, 0xc0000844d0?}, 0xc000103880?, 0x55d160?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0000b77d0?, 0x5932a4?, 0xc0006bd580?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 531
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x286

goroutine 531 [chan receive, 75 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0xc000c17740, 0xc0000844d0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x295
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 529
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x614

goroutine 745 [chan send, 75 minutes]:
os/exec.(*Cmd).watchCtx(0xc001620000, 0xc001782a10)
	/usr/local/go/src/os/exec/exec.go:814 +0x3e5
created by os/exec.(*Cmd).Start in goroutine 405
	/usr/local/go/src/os/exec/exec.go:775 +0x8f3

goroutine 546 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc0006bdc50, 0x23)
	/usr/local/go/src/runtime/sema.go:597 +0x159
sync.(*Cond).Wait(0xc001653ce0?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3fc3d20)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x86
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000c17740)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x44
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x4c5c93?, 0xc001579aa0?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x13
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x3fae230?, 0xc0000844d0?}, 0x41b1b4?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x51
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x3fae230, 0xc0000844d0}, 0xc001653f50, {0x3f65240, 0xc000c08240}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xe5
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x0?, {0x3f65240?, 0xc000c08240?}, 0x0?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x46
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000bde180, 0x3b9aca00, 0x0, 0x1, 0xc0000844d0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 531
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x1d9

goroutine 1842 [chan receive, 28 minutes]:
testing.(*T).Run(0xc001667180, {0x31f3138?, 0x1a3185c5000?}, 0xc000b9cba0)
	/usr/local/go/src/testing/testing.go:1859 +0x431
k8s.io/minikube/test/integration.TestMultiNode(0xc001667180)
	/home/jenkins/workspace/Build_Cross/test/integration/multinode_test.go:59 +0x367
testing.tRunner(0xc001667180, 0x3c51e10)
	/usr/local/go/src/testing/testing.go:1792 +0xf4
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1851 +0x413

goroutine 555 [chan send, 75 minutes]:
os/exec.(*Cmd).watchCtx(0xc00187ca80, 0xc0004f6e00)
	/usr/local/go/src/os/exec/exec.go:814 +0x3e5
created by os/exec.(*Cmd).Start in goroutine 554
	/usr/local/go/src/os/exec/exec.go:775 +0x8f3

goroutine 2044 [syscall, 4 minutes]:
syscall.Syscall6(0xf7, 0x3, 0xd, 0xc000b91a08, 0x4, 0xc0004c4360, 0x0)
	/usr/local/go/src/syscall/syscall_linux.go:95 +0x39
internal/syscall/unix.Waitid(0xc000b91a36?, 0xc000b91b60?, 0x5930ab?, 0x7ffdc65b51ac?, 0x0?)
	/usr/local/go/src/internal/syscall/unix/waitid_linux.go:18 +0x39
os.(*Process).pidfdWait.func1(...)
	/usr/local/go/src/os/pidfd_linux.go:106
os.ignoringEINTR(...)
	/usr/local/go/src/os/file_posix.go:251
os.(*Process).pidfdWait(0xc0005a0540?)
	/usr/local/go/src/os/pidfd_linux.go:105 +0x209
os.(*Process).wait(0xc000680008?)
	/usr/local/go/src/os/exec_unix.go:27 +0x25
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:358
os/exec.(*Cmd).Wait(0xc001620480)
	/usr/local/go/src/os/exec/exec.go:922 +0x45
os/exec.(*Cmd).Run(0xc001620480)
	/usr/local/go/src/os/exec/exec.go:626 +0x2d
k8s.io/minikube/test/integration.Run(0xc000102a80, 0xc001620480)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.validateNameConflict({0x3fadeb0, 0xc00030e3f0}, 0xc000102a80, {0xc0000134a0, 0x10})
	/home/jenkins/workspace/Build_Cross/test/integration/multinode_test.go:464 +0x48d
k8s.io/minikube/test/integration.TestMultiNode.func1.1(0xc000102a80?)
	/home/jenkins/workspace/Build_Cross/test/integration/multinode_test.go:86 +0x6b
testing.tRunner(0xc000102a80, 0xc0008dc100)
	/usr/local/go/src/testing/testing.go:1792 +0xf4
created by testing.(*T).Run in goroutine 1818
	/usr/local/go/src/testing/testing.go:1851 +0x413

goroutine 1818 [chan receive, 4 minutes]:
testing.(*T).Run(0xc00168f180, {0x3218126?, 0x40962a4?}, 0xc0008dc100)
	/usr/local/go/src/testing/testing.go:1859 +0x431
k8s.io/minikube/test/integration.TestMultiNode.func1(0xc00168f180)
	/home/jenkins/workspace/Build_Cross/test/integration/multinode_test.go:84 +0x17d
testing.tRunner(0xc00168f180, 0xc000b9cba0)
	/usr/local/go/src/testing/testing.go:1792 +0xf4
created by testing.(*T).Run in goroutine 1842
	/usr/local/go/src/testing/testing.go:1851 +0x413

goroutine 2098 [select, 4 minutes]:
os/exec.(*Cmd).watchCtx(0xc001620480, 0xc000085110)
	/usr/local/go/src/os/exec/exec.go:789 +0xb2
created by os/exec.(*Cmd).Start in goroutine 2044
	/usr/local/go/src/os/exec/exec.go:775 +0x8f3

goroutine 2049 [IO wait, 2 minutes]:
internal/poll.runtime_pollWait(0x7a418a9660a8, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0x85
internal/poll.(*pollDesc).wait(0xc001578900?, 0xc00085b76a?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc001578900, {0xc00085b76a, 0x896, 0x896})
	/usr/local/go/src/internal/poll/fd_unix.go:165 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0006ae138, {0xc00085b76a?, 0x41ab46?, 0x7a41d1fb6102?})
	/usr/local/go/src/os/file.go:124 +0x4f
bytes.(*Buffer).ReadFrom(0xc000baa450, {0x3f63640, 0xc000c00020})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3f637c0, 0xc000baa450}, {0x3f63640, 0xc000c00020}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc0006ae138?, {0x3f637c0, 0xc000baa450})
	/usr/local/go/src/os/file.go:275 +0x4f
os.(*File).WriteTo(0xc0006ae138, {0x3f637c0, 0xc000baa450})
	/usr/local/go/src/os/file.go:253 +0x9c
io.copyBuffer({0x3f637c0, 0xc000baa450}, {0x3f636c0, 0xc0006ae138}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:596 +0x34
os/exec.(*Cmd).Start.func2(0xc0007f0540?)
	/usr/local/go/src/os/exec/exec.go:749 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2044
	/usr/local/go/src/os/exec/exec.go:748 +0x92b

goroutine 2048 [IO wait, 4 minutes]:
internal/poll.runtime_pollWait(0x7a418a966738, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0x85
internal/poll.(*pollDesc).wait(0xc001578840?, 0xc001576a91?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc001578840, {0xc001576a91, 0x56f, 0x56f})
	/usr/local/go/src/internal/poll/fd_unix.go:165 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0006ae108, {0xc001576a91?, 0x41ab46?, 0x55cfed?})
	/usr/local/go/src/os/file.go:124 +0x4f
bytes.(*Buffer).ReadFrom(0xc000baa420, {0x3f63640, 0xc000c00018})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3f637c0, 0xc000baa420}, {0x3f63640, 0xc000c00018}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc0006ae108?, {0x3f637c0, 0xc000baa420})
	/usr/local/go/src/os/file.go:275 +0x4f
os.(*File).WriteTo(0xc0006ae108, {0x3f637c0, 0xc000baa420})
	/usr/local/go/src/os/file.go:253 +0x9c
io.copyBuffer({0x3f637c0, 0xc000baa420}, {0x3f636c0, 0xc0006ae108}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:596 +0x34
os/exec.(*Cmd).Start.func2(0xc0008dc100?)
	/usr/local/go/src/os/exec/exec.go:749 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2044
	/usr/local/go/src/os/exec/exec.go:748 +0x92b

Test pass (92/166)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 15.07
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.06
9 TestDownloadOnly/v1.28.0/DeleteAll 0.21
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.34.1/json-events 11.86
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.06
18 TestDownloadOnly/v1.34.1/DeleteAll 0.22
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.14
20 TestDownloadOnlyKic 0.4
21 TestBinaryMirror 0.81
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
39 TestErrorSpam/start 0.65
40 TestErrorSpam/status 0.87
41 TestErrorSpam/pause 1.32
42 TestErrorSpam/unpause 1.33
43 TestErrorSpam/stop 1.39
46 TestFunctional/serial/CopySyncFile 0
48 TestFunctional/serial/AuditLog 0
50 TestFunctional/serial/KubeContext 0.05
54 TestFunctional/serial/CacheCmd/cache/add_remote 3.11
55 TestFunctional/serial/CacheCmd/cache/add_local 1.96
56 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
57 TestFunctional/serial/CacheCmd/cache/list 0.05
58 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.29
59 TestFunctional/serial/CacheCmd/cache/cache_reload 1.58
60 TestFunctional/serial/CacheCmd/cache/delete 0.11
65 TestFunctional/serial/LogsCmd 0.87
66 TestFunctional/serial/LogsFileCmd 0.9
69 TestFunctional/parallel/ConfigCmd 0.38
71 TestFunctional/parallel/DryRun 0.39
72 TestFunctional/parallel/InternationalLanguage 0.16
78 TestFunctional/parallel/AddonsCmd 0.14
81 TestFunctional/parallel/SSHCmd 0.66
82 TestFunctional/parallel/CpCmd 1.89
84 TestFunctional/parallel/FileSync 0.31
85 TestFunctional/parallel/CertSync 1.86
91 TestFunctional/parallel/NonActiveRuntimeDisabled 0.62
93 TestFunctional/parallel/License 0.43
94 TestFunctional/parallel/ImageCommands/ImageListShort 0.24
95 TestFunctional/parallel/ImageCommands/ImageListTable 0.22
96 TestFunctional/parallel/ImageCommands/ImageListJson 0.22
97 TestFunctional/parallel/ImageCommands/ImageListYaml 0.22
98 TestFunctional/parallel/ImageCommands/ImageBuild 4.01
99 TestFunctional/parallel/ImageCommands/Setup 1.96
100 TestFunctional/parallel/UpdateContextCmd/no_changes 0.15
101 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.15
102 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.14
104 TestFunctional/parallel/ProfileCmd/profile_not_create 0.48
105 TestFunctional/parallel/ProfileCmd/profile_list 0.49
109 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
112 TestFunctional/parallel/ProfileCmd/profile_json_output 0.4
117 TestFunctional/parallel/ImageCommands/ImageRemove 0.53
118 TestFunctional/parallel/MountCmd/specific-port 1.73
121 TestFunctional/parallel/MountCmd/VerifyCleanup 2.05
128 TestFunctional/parallel/Version/short 0.06
129 TestFunctional/parallel/Version/components 0.51
133 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
134 TestFunctional/delete_echo-server_images 0.04
135 TestFunctional/delete_my-image_image 0.02
136 TestFunctional/delete_minikube_cached_images 0.02
164 TestJSONOutput/start/Audit 0
169 TestJSONOutput/pause/Command 0.47
170 TestJSONOutput/pause/Audit 0
172 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
173 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
175 TestJSONOutput/unpause/Command 0.45
176 TestJSONOutput/unpause/Audit 0
178 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
179 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
181 TestJSONOutput/stop/Command 1.22
182 TestJSONOutput/stop/Audit 0
184 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
185 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
186 TestErrorJSONOutput 0.2
188 TestKicCustomNetwork/create_custom_network 37.49
189 TestKicCustomNetwork/use_default_bridge_network 23.37
190 TestKicExistingNetwork 24.3
191 TestKicCustomSubnet 25.28
192 TestKicStaticIP 24.6
193 TestMainNoArgs 0.05
197 TestMountStart/serial/StartWithMountFirst 6.64
198 TestMountStart/serial/VerifyMountFirst 0.27
199 TestMountStart/serial/StartWithMountSecond 5.67
200 TestMountStart/serial/VerifyMountSecond 0.26
201 TestMountStart/serial/DeleteFirst 1.66
202 TestMountStart/serial/VerifyMountPostDelete 0.26
203 TestMountStart/serial/Stop 1.19
204 TestMountStart/serial/RestartStopped 7.78
205 TestMountStart/serial/VerifyMountPostStop 0.26
TestDownloadOnly/v1.28.0/json-events (15.07s)

=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-256452 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-256452 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (15.070740604s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (15.07s)

TestDownloadOnly/v1.28.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1006 13:56:09.939610  629719 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1006 13:56:09.939703  629719 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21701-626179/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

TestDownloadOnly/v1.28.0/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-256452
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-256452: exit status 85 (63.208689ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-256452 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-256452 │ jenkins │ v1.37.0 │ 06 Oct 25 13:55 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/06 13:55:54
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1006 13:55:54.911664  629732 out.go:360] Setting OutFile to fd 1 ...
	I1006 13:55:54.911920  629732 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 13:55:54.911929  629732 out.go:374] Setting ErrFile to fd 2...
	I1006 13:55:54.911933  629732 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 13:55:54.912115  629732 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-626179/.minikube/bin
	W1006 13:55:54.912288  629732 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21701-626179/.minikube/config/config.json: open /home/jenkins/minikube-integration/21701-626179/.minikube/config/config.json: no such file or directory
	I1006 13:55:54.912759  629732 out.go:368] Setting JSON to true
	I1006 13:55:54.913721  629732 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":16691,"bootTime":1759742264,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1006 13:55:54.913824  629732 start.go:140] virtualization: kvm guest
	I1006 13:55:54.916044  629732 out.go:99] [download-only-256452] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	W1006 13:55:54.916211  629732 preload.go:349] Failed to list preload files: open /home/jenkins/minikube-integration/21701-626179/.minikube/cache/preloaded-tarball: no such file or directory
	I1006 13:55:54.916235  629732 notify.go:220] Checking for updates...
	I1006 13:55:54.917501  629732 out.go:171] MINIKUBE_LOCATION=21701
	I1006 13:55:54.919082  629732 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1006 13:55:54.920502  629732 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21701-626179/kubeconfig
	I1006 13:55:54.921640  629732 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21701-626179/.minikube
	I1006 13:55:54.922766  629732 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1006 13:55:54.924761  629732 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1006 13:55:54.925107  629732 driver.go:421] Setting default libvirt URI to qemu:///system
	I1006 13:55:54.948847  629732 docker.go:123] docker version: linux-28.5.0:Docker Engine - Community
	I1006 13:55:54.948919  629732 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 13:55:55.113375  629732 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:31 OomKillDisable:false NGoroutines:66 SystemTime:2025-10-06 13:55:55.102990345 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1006 13:55:55.113514  629732 docker.go:318] overlay module found
	I1006 13:55:55.115196  629732 out.go:99] Using the docker driver based on user configuration
	I1006 13:55:55.115244  629732 start.go:304] selected driver: docker
	I1006 13:55:55.115254  629732 start.go:924] validating driver "docker" against <nil>
	I1006 13:55:55.115455  629732 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 13:55:55.177152  629732 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:31 OomKillDisable:false NGoroutines:66 SystemTime:2025-10-06 13:55:55.166801359 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1006 13:55:55.177411  629732 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1006 13:55:55.177940  629732 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1006 13:55:55.178075  629732 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1006 13:55:55.179816  629732 out.go:171] Using Docker driver with root privileges
	I1006 13:55:55.180921  629732 cni.go:84] Creating CNI manager for ""
	I1006 13:55:55.180980  629732 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1006 13:55:55.180993  629732 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1006 13:55:55.181047  629732 start.go:348] cluster config:
	{Name:download-only-256452 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-256452 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 13:55:55.182297  629732 out.go:99] Starting "download-only-256452" primary control-plane node in "download-only-256452" cluster
	I1006 13:55:55.182315  629732 cache.go:123] Beginning downloading kic base image for docker with crio
	I1006 13:55:55.183424  629732 out.go:99] Pulling base image v0.0.48-1759382731-21643 ...
	I1006 13:55:55.183449  629732 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1006 13:55:55.183562  629732 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1006 13:55:55.200008  629732 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d to local cache
	I1006 13:55:55.200730  629732 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local cache directory
	I1006 13:55:55.200826  629732 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d to local cache
	I1006 13:55:55.292506  629732 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1006 13:55:55.292569  629732 cache.go:58] Caching tarball of preloaded images
	I1006 13:55:55.292765  629732 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1006 13:55:55.294732  629732 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1006 13:55:55.294751  629732 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1006 13:55:55.402745  629732 preload.go:290] Got checksum from GCS API "72bc7f8573f574c02d8c9a9b3496176b"
	I1006 13:55:55.402859  629732 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:72bc7f8573f574c02d8c9a9b3496176b -> /home/jenkins/minikube-integration/21701-626179/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1006 13:56:08.775830  629732 cache.go:61] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1006 13:56:08.776294  629732 profile.go:143] Saving config to /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/download-only-256452/config.json ...
	I1006 13:56:08.776343  629732 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/download-only-256452/config.json: {Name:mk11cec0cb750724c3a0c3f7b5d1f54afe06096d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 13:56:08.776563  629732 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1006 13:56:08.776765  629732 download.go:108] Downloading: https://dl.k8s.io/release/v1.28.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/21701-626179/.minikube/cache/linux/amd64/v1.28.0/kubectl
	I1006 13:56:08.882556  629732 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d as a tarball
	
	
	* The control-plane node download-only-256452 host does not exist
	  To start a cluster, run: "minikube start -p download-only-256452"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.06s)

TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-256452
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnly/v1.34.1/json-events (11.86s)

=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-040731 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-040731 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio: (11.86387035s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (11.86s)

TestDownloadOnly/v1.34.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1006 13:56:22.219361  629719 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1006 13:56:22.219417  629719 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21701-626179/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

TestDownloadOnly/v1.34.1/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-040731
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-040731: exit status 85 (61.406143ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-256452 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-256452 │ jenkins │ v1.37.0 │ 06 Oct 25 13:55 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 06 Oct 25 13:56 UTC │ 06 Oct 25 13:56 UTC │
	│ delete  │ -p download-only-256452                                                                                                                                                   │ download-only-256452 │ jenkins │ v1.37.0 │ 06 Oct 25 13:56 UTC │ 06 Oct 25 13:56 UTC │
	│ start   │ -o=json --download-only -p download-only-040731 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-040731 │ jenkins │ v1.37.0 │ 06 Oct 25 13:56 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/06 13:56:10
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1006 13:56:10.398417  630137 out.go:360] Setting OutFile to fd 1 ...
	I1006 13:56:10.398552  630137 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 13:56:10.398563  630137 out.go:374] Setting ErrFile to fd 2...
	I1006 13:56:10.398567  630137 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 13:56:10.398807  630137 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-626179/.minikube/bin
	I1006 13:56:10.399356  630137 out.go:368] Setting JSON to true
	I1006 13:56:10.400282  630137 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":16706,"bootTime":1759742264,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1006 13:56:10.400385  630137 start.go:140] virtualization: kvm guest
	I1006 13:56:10.401991  630137 out.go:99] [download-only-040731] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1006 13:56:10.402171  630137 notify.go:220] Checking for updates...
	I1006 13:56:10.403047  630137 out.go:171] MINIKUBE_LOCATION=21701
	I1006 13:56:10.404580  630137 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1006 13:56:10.405523  630137 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21701-626179/kubeconfig
	I1006 13:56:10.406727  630137 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21701-626179/.minikube
	I1006 13:56:10.407659  630137 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1006 13:56:10.409497  630137 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1006 13:56:10.409788  630137 driver.go:421] Setting default libvirt URI to qemu:///system
	I1006 13:56:10.433837  630137 docker.go:123] docker version: linux-28.5.0:Docker Engine - Community
	I1006 13:56:10.433972  630137 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 13:56:10.492859  630137 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:31 OomKillDisable:false NGoroutines:52 SystemTime:2025-10-06 13:56:10.482632746 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1006 13:56:10.492973  630137 docker.go:318] overlay module found
	I1006 13:56:10.494176  630137 out.go:99] Using the docker driver based on user configuration
	I1006 13:56:10.494223  630137 start.go:304] selected driver: docker
	I1006 13:56:10.494233  630137 start.go:924] validating driver "docker" against <nil>
	I1006 13:56:10.494324  630137 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 13:56:10.549047  630137 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:31 OomKillDisable:false NGoroutines:52 SystemTime:2025-10-06 13:56:10.5391216 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1006 13:56:10.549233  630137 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1006 13:56:10.549705  630137 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1006 13:56:10.549844  630137 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1006 13:56:10.551367  630137 out.go:171] Using Docker driver with root privileges
	I1006 13:56:10.552678  630137 cni.go:84] Creating CNI manager for ""
	I1006 13:56:10.552737  630137 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1006 13:56:10.552749  630137 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1006 13:56:10.552813  630137 start.go:348] cluster config:
	{Name:download-only-040731 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:download-only-040731 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 13:56:10.553983  630137 out.go:99] Starting "download-only-040731" primary control-plane node in "download-only-040731" cluster
	I1006 13:56:10.554002  630137 cache.go:123] Beginning downloading kic base image for docker with crio
	I1006 13:56:10.554880  630137 out.go:99] Pulling base image v0.0.48-1759382731-21643 ...
	I1006 13:56:10.554903  630137 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1006 13:56:10.554966  630137 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1006 13:56:10.573733  630137 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d to local cache
	I1006 13:56:10.573867  630137 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local cache directory
	I1006 13:56:10.573886  630137 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local cache directory, skipping pull
	I1006 13:56:10.573891  630137 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in cache, skipping pull
	I1006 13:56:10.573898  630137 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d as a tarball
	I1006 13:56:10.656222  630137 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1006 13:56:10.656259  630137 cache.go:58] Caching tarball of preloaded images
	I1006 13:56:10.656457  630137 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1006 13:56:10.658296  630137 out.go:99] Downloading Kubernetes v1.34.1 preload ...
	I1006 13:56:10.658318  630137 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1006 13:56:10.774575  630137 preload.go:290] Got checksum from GCS API "d1a46823b9241c5d38b5e0866197f2a8"
	I1006 13:56:10.774640  630137 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4?checksum=md5:d1a46823b9241c5d38b5e0866197f2a8 -> /home/jenkins/minikube-integration/21701-626179/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-040731 host does not exist
	  To start a cluster, run: "minikube start -p download-only-040731"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.06s)

TestDownloadOnly/v1.34.1/DeleteAll (0.22s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.22s)

TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-040731
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnlyKic (0.4s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-650660 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "download-docker-650660" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-650660
--- PASS: TestDownloadOnlyKic (0.40s)

TestBinaryMirror (0.81s)

=== RUN   TestBinaryMirror
I1006 13:56:23.312033  629719 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-501421 --alsologtostderr --binary-mirror http://127.0.0.1:36469 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-501421" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-501421
--- PASS: TestBinaryMirror (0.81s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-834039
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-834039: exit status 85 (56.449521ms)

-- stdout --
	* Profile "addons-834039" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-834039"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-834039
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-834039: exit status 85 (56.769849ms)

-- stdout --
	* Profile "addons-834039" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-834039"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)
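Note: both PreSetup checks assert the same contract: addon commands against a profile that has never been started fail fast with exit status 85 instead of silently succeeding. A minimal Go sketch of that check outside the harness (binary path and profile name are taken from this run; any nonexistent profile behaves the same):

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func main() {
        // Enabling an addon on a profile that was never created is expected
        // to fail fast; the runs above show exit status 85 for this case.
        cmd := exec.Command("out/minikube-linux-amd64",
            "addons", "enable", "dashboard", "-p", "addons-834039")
        out, err := cmd.CombinedOutput()
        var ee *exec.ExitError
        if errors.As(err, &ee) {
            fmt.Printf("exit status %d\n%s", ee.ExitCode(), out)
        }
    }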

TestErrorSpam/start (0.65s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-500584 --log_dir /tmp/nospam-500584 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-500584 --log_dir /tmp/nospam-500584 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-500584 --log_dir /tmp/nospam-500584 start --dry-run
--- PASS: TestErrorSpam/start (0.65s)

TestErrorSpam/status (0.87s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-500584 --log_dir /tmp/nospam-500584 status
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-500584 --log_dir /tmp/nospam-500584 status: exit status 6 (289.194688ms)

-- stdout --
	nospam-500584
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1006 14:13:17.307740  641902 status.go:458] kubeconfig endpoint: get endpoint: "nospam-500584" does not appear in /home/jenkins/minikube-integration/21701-626179/kubeconfig

** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-500584 --log_dir /tmp/nospam-500584 status" failed: exit status 6
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-500584 --log_dir /tmp/nospam-500584 status
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-500584 --log_dir /tmp/nospam-500584 status: exit status 6 (290.190047ms)

-- stdout --
	nospam-500584
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1006 14:13:17.598327  642034 status.go:458] kubeconfig endpoint: get endpoint: "nospam-500584" does not appear in /home/jenkins/minikube-integration/21701-626179/kubeconfig

** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-500584 --log_dir /tmp/nospam-500584 status" failed: exit status 6
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-500584 --log_dir /tmp/nospam-500584 status
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-500584 --log_dir /tmp/nospam-500584 status: exit status 6 (287.439036ms)

-- stdout --
	nospam-500584
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1006 14:13:17.885963  642145 status.go:458] kubeconfig endpoint: get endpoint: "nospam-500584" does not appear in /home/jenkins/minikube-integration/21701-626179/kubeconfig

** /stderr **
error_spam_test.go:174: "out/minikube-linux-amd64 -p nospam-500584 --log_dir /tmp/nospam-500584 status" failed: exit status 6
--- PASS: TestErrorSpam/status (0.87s)
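Note: minikube status distinguishes failure modes by exit code; here the host and kubelet are Running but the kubeconfig entry is missing, so each of the three invocations exits 6, and the output itself names the repair step (minikube update-context). A hedged Go sketch of checking for that code, using the binary path and profile name from this run:

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    // run mirrors the harness's "(dbg) Run:" lines: execute the built
    // minikube binary and report the exit code plus combined output.
    func run(args ...string) (int, []byte) {
        out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
        var ee *exec.ExitError
        if errors.As(err, &ee) {
            return ee.ExitCode(), out
        }
        return 0, out
    }

    func main() {
        code, out := run("-p", "nospam-500584", "status")
        fmt.Printf("status exited %d\n%s", code, out)
        if code == 6 { // kubeconfig misconfigured, as in the runs above
            run("-p", "nospam-500584", "update-context")
        }
    }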

TestErrorSpam/pause (1.32s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-500584 --log_dir /tmp/nospam-500584 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-500584 --log_dir /tmp/nospam-500584 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-500584 --log_dir /tmp/nospam-500584 pause
--- PASS: TestErrorSpam/pause (1.32s)

TestErrorSpam/unpause (1.33s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-500584 --log_dir /tmp/nospam-500584 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-500584 --log_dir /tmp/nospam-500584 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-500584 --log_dir /tmp/nospam-500584 unpause
--- PASS: TestErrorSpam/unpause (1.33s)

TestErrorSpam/stop (1.39s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-500584 --log_dir /tmp/nospam-500584 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-500584 --log_dir /tmp/nospam-500584 stop: (1.206546871s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-500584 --log_dir /tmp/nospam-500584 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-500584 --log_dir /tmp/nospam-500584 stop
--- PASS: TestErrorSpam/stop (1.39s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21701-626179/.minikube/files/etc/test/nested/copy/629719/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.11s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-135520 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-135520 cache add registry.k8s.io/pause:3.1: (1.058184287s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-135520 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-135520 cache add registry.k8s.io/pause:3.3: (1.125545764s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-135520 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.11s)

TestFunctional/serial/CacheCmd/cache/add_local (1.96s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-135520 /tmp/TestFunctionalserialCacheCmdcacheadd_local1843678318/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-135520 cache add minikube-local-cache-test:functional-135520
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-135520 cache add minikube-local-cache-test:functional-135520: (1.634308366s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-135520 cache delete minikube-local-cache-test:functional-135520
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-135520
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.96s)
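Note: the add_local flow is: build an image only the host's Docker knows about, cache add it so minikube stores it and loads it into the node, then delete both copies. A sketch of the same sequence (the image tag and build directory below are illustrative, not from this run):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // must runs a command, echoes its output, and aborts on failure.
    func must(name string, args ...string) {
        out, err := exec.Command(name, args...).CombinedOutput()
        fmt.Printf("$ %s %v\n%s", name, args, out)
        if err != nil {
            panic(err)
        }
    }

    func main() {
        // Build a host-local image, push it into minikube's cache, clean up.
        must("docker", "build", "-t", "local-cache-demo:dev", ".")
        must("out/minikube-linux-amd64", "-p", "functional-135520", "cache", "add", "local-cache-demo:dev")
        must("out/minikube-linux-amd64", "-p", "functional-135520", "cache", "delete", "local-cache-demo:dev")
        must("docker", "rmi", "local-cache-demo:dev")
    }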

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-135520 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.58s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-135520 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-135520 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-135520 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (287.638882ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-135520 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-135520 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.58s)
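Note: the reload check works because cache reload re-copies every cached image into the node's runtime: the test removes pause:latest with crictl rmi, confirms crictl inspecti now fails (surfaced through ssh as exit status 1), reloads, and confirms the image is back. A sketch of the same round trip, under the same binary/profile assumptions as above:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // mk runs the built minikube binary, echoing output and returning the error.
    func mk(args ...string) error {
        out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
        fmt.Printf("%s", out)
        return err
    }

    func main() {
        p := []string{"-p", "functional-135520"}
        // Remove the image inside the node, then prove it is gone.
        mk(append(p, "ssh", "sudo crictl rmi registry.k8s.io/pause:latest")...)
        if err := mk(append(p, "ssh", "sudo crictl inspecti registry.k8s.io/pause:latest")...); err != nil {
            fmt.Println("image gone, as expected:", err)
        }
        // Reload from minikube's cache; the inspect should now succeed.
        mk(append(p, "cache", "reload")...)
        fmt.Println("after reload:", mk(append(p, "ssh", "sudo crictl inspecti registry.k8s.io/pause:latest")...))
    }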

TestFunctional/serial/CacheCmd/cache/delete (0.11s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

TestFunctional/serial/LogsCmd (0.87s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-135520 logs
--- PASS: TestFunctional/serial/LogsCmd (0.87s)

TestFunctional/serial/LogsFileCmd (0.9s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-135520 logs --file /tmp/TestFunctionalserialLogsFileCmd806435457/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.90s)

TestFunctional/parallel/ConfigCmd (0.38s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-135520 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-135520 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-135520 config get cpus: exit status 14 (72.370077ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-135520 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-135520 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-135520 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-135520 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-135520 config get cpus: exit status 14 (64.335067ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.38s)
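Note: config get on a key that is not set exits 14, which is how the test distinguishes "unset" from "set to an empty value"; the middle of the run shows set/get/unset round-tripping cleanly. A sketch under the same assumptions:

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    // config invokes "minikube config <verb> <key> [value]" against this
    // run's profile and returns the exit code and output.
    func config(verb, key string, extra ...string) (int, string) {
        args := append([]string{"-p", "functional-135520", "config", verb, key}, extra...)
        out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
        var ee *exec.ExitError
        if errors.As(err, &ee) {
            return ee.ExitCode(), string(out)
        }
        return 0, string(out)
    }

    func main() {
        config("unset", "cpus")
        code, _ := config("get", "cpus")
        fmt.Println(code) // 14: key not present, matching the log above
        config("set", "cpus", "2")
        code, val := config("get", "cpus")
        fmt.Println(code, val) // 0 and "2" once the key is set
    }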

TestFunctional/parallel/DryRun (0.39s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-135520 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-135520 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (164.335748ms)

-- stdout --
	* [functional-135520] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21701
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21701-626179/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21701-626179/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1006 14:40:40.067474  678223 out.go:360] Setting OutFile to fd 1 ...
	I1006 14:40:40.068010  678223 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 14:40:40.068031  678223 out.go:374] Setting ErrFile to fd 2...
	I1006 14:40:40.068038  678223 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 14:40:40.068563  678223 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-626179/.minikube/bin
	I1006 14:40:40.069570  678223 out.go:368] Setting JSON to false
	I1006 14:40:40.070680  678223 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":19376,"bootTime":1759742264,"procs":193,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1006 14:40:40.070777  678223 start.go:140] virtualization: kvm guest
	I1006 14:40:40.072754  678223 out.go:179] * [functional-135520] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1006 14:40:40.074004  678223 out.go:179]   - MINIKUBE_LOCATION=21701
	I1006 14:40:40.074011  678223 notify.go:220] Checking for updates...
	I1006 14:40:40.075833  678223 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1006 14:40:40.076875  678223 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21701-626179/kubeconfig
	I1006 14:40:40.077927  678223 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21701-626179/.minikube
	I1006 14:40:40.079000  678223 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1006 14:40:40.080086  678223 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1006 14:40:40.081628  678223 config.go:182] Loaded profile config "functional-135520": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 14:40:40.082141  678223 driver.go:421] Setting default libvirt URI to qemu:///system
	I1006 14:40:40.107916  678223 docker.go:123] docker version: linux-28.5.0:Docker Engine - Community
	I1006 14:40:40.108005  678223 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 14:40:40.168757  678223 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-06 14:40:40.158197501 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1006 14:40:40.168901  678223 docker.go:318] overlay module found
	I1006 14:40:40.170299  678223 out.go:179] * Using the docker driver based on existing profile
	I1006 14:40:40.171473  678223 start.go:304] selected driver: docker
	I1006 14:40:40.171495  678223 start.go:924] validating driver "docker" against &{Name:functional-135520 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-135520 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 14:40:40.171612  678223 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1006 14:40:40.175599  678223 out.go:203] 
	W1006 14:40:40.176700  678223 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1006 14:40:40.177637  678223 out.go:203] 

** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-135520 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.39s)
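Note: --dry-run still runs the full validation pass, so an impossible request (250MB against the 1800MB minimum) fails with exit status 23 (RSRC_INSUFFICIENT_REQ_MEMORY) before anything is created, while the second, unconstrained dry run succeeds. A sketch of using that as a cheap preflight check, with the flags taken from this run:

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func main() {
        // Validate a start configuration without creating or mutating the
        // cluster; exit status 23 signals insufficient requested memory.
        cmd := exec.Command("out/minikube-linux-amd64",
            "start", "-p", "functional-135520", "--dry-run",
            "--memory", "250MB", "--driver=docker", "--container-runtime=crio")
        _, err := cmd.CombinedOutput()
        var ee *exec.ExitError
        if errors.As(err, &ee) && ee.ExitCode() == 23 {
            fmt.Println("preflight rejected the memory request, as in the log above")
        }
    }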

TestFunctional/parallel/InternationalLanguage (0.16s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-135520 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-135520 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (163.708187ms)

-- stdout --
	* [functional-135520] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21701
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21701-626179/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21701-626179/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1006 14:40:39.900351  678089 out.go:360] Setting OutFile to fd 1 ...
	I1006 14:40:39.900593  678089 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 14:40:39.900601  678089 out.go:374] Setting ErrFile to fd 2...
	I1006 14:40:39.900605  678089 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 14:40:39.900942  678089 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-626179/.minikube/bin
	I1006 14:40:39.901430  678089 out.go:368] Setting JSON to false
	I1006 14:40:39.902313  678089 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":19376,"bootTime":1759742264,"procs":191,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1006 14:40:39.902413  678089 start.go:140] virtualization: kvm guest
	I1006 14:40:39.904366  678089 out.go:179] * [functional-135520] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1006 14:40:39.905410  678089 out.go:179]   - MINIKUBE_LOCATION=21701
	I1006 14:40:39.905424  678089 notify.go:220] Checking for updates...
	I1006 14:40:39.907183  678089 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1006 14:40:39.908154  678089 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21701-626179/kubeconfig
	I1006 14:40:39.909065  678089 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21701-626179/.minikube
	I1006 14:40:39.909971  678089 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1006 14:40:39.910852  678089 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1006 14:40:39.912043  678089 config.go:182] Loaded profile config "functional-135520": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 14:40:39.912596  678089 driver.go:421] Setting default libvirt URI to qemu:///system
	I1006 14:40:39.938160  678089 docker.go:123] docker version: linux-28.5.0:Docker Engine - Community
	I1006 14:40:39.938329  678089 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 14:40:40.004046  678089 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-06 14:40:39.992350294 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1006 14:40:40.004218  678089 docker.go:318] overlay module found
	I1006 14:40:40.006358  678089 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1006 14:40:40.007533  678089 start.go:304] selected driver: docker
	I1006 14:40:40.007550  678089 start.go:924] validating driver "docker" against &{Name:functional-135520 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-135520 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 14:40:40.007629  678089 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1006 14:40:40.009651  678089 out.go:203] 
	W1006 14:40:40.011187  678089 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1006 14:40:40.012231  678089 out.go:203] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.16s)

TestFunctional/parallel/AddonsCmd (0.14s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-135520 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-135520 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

TestFunctional/parallel/SSHCmd (0.66s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-135520 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-135520 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.66s)

TestFunctional/parallel/CpCmd (1.89s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-135520 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-135520 ssh -n functional-135520 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-135520 cp functional-135520:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2529337736/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-135520 ssh -n functional-135520 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-135520 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-135520 ssh -n functional-135520 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.89s)
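Note: the cp test round-trips a file three ways: host file into the node, node file back to the host, and host file into a node path whose parent directories do not yet exist, each verified with ssh -n ... sudo cat. A sketch of one round trip (the host-side destination path below is illustrative):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // mk runs the built minikube binary and returns its combined output.
    func mk(args ...string) string {
        out, _ := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
        return string(out)
    }

    func main() {
        p := "functional-135520"
        // Copy a host file into the node, then read it back over SSH.
        mk("-p", p, "cp", "testdata/cp-test.txt", "/home/docker/cp-test.txt")
        fmt.Print(mk("-p", p, "ssh", "-n", p, "sudo cat /home/docker/cp-test.txt"))
        // Copy the node-side file back out to the host.
        mk("-p", p, "cp", p+":/home/docker/cp-test.txt", "/tmp/cp-test-roundtrip.txt")
    }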

TestFunctional/parallel/FileSync (0.31s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/629719/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-135520 ssh "sudo cat /etc/test/nested/copy/629719/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.31s)

TestFunctional/parallel/CertSync (1.86s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/629719.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-135520 ssh "sudo cat /etc/ssl/certs/629719.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/629719.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-135520 ssh "sudo cat /usr/share/ca-certificates/629719.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-135520 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/6297192.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-135520 ssh "sudo cat /etc/ssl/certs/6297192.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/6297192.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-135520 ssh "sudo cat /usr/share/ca-certificates/6297192.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-135520 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.86s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.62s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-135520 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-135520 ssh "sudo systemctl is-active docker": exit status 1 (311.572378ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-135520 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-135520 ssh "sudo systemctl is-active containerd": exit status 1 (306.2065ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.62s)
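Note: with crio as the active runtime, the test expects the docker and containerd units to be inactive. systemctl is-active prints "inactive" and exits 3 (the systemd convention for a stopped unit); the log above shows minikube ssh surfacing that as its own exit status 1 while recording the remote status 3 on stderr. A sketch of the same probe:

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func main() {
        for _, unit := range []string{"docker", "containerd"} {
            cmd := exec.Command("out/minikube-linux-amd64",
                "-p", "functional-135520", "ssh", "sudo systemctl is-active "+unit)
            out, err := cmd.CombinedOutput()
            code := 0
            var ee *exec.ExitError
            if errors.As(err, &ee) {
                code = ee.ExitCode()
            }
            // Expect "inactive" plus a non-zero exit for both units on a crio node.
            fmt.Printf("%s: %q exit=%d\n", unit, string(out), code)
        }
    }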

TestFunctional/parallel/License (0.43s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.43s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-135520 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-135520 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-135520 image ls --format short --alsologtostderr:
I1006 14:40:42.521233  679803 out.go:360] Setting OutFile to fd 1 ...
I1006 14:40:42.521504  679803 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1006 14:40:42.521515  679803 out.go:374] Setting ErrFile to fd 2...
I1006 14:40:42.521519  679803 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1006 14:40:42.521728  679803 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-626179/.minikube/bin
I1006 14:40:42.522406  679803 config.go:182] Loaded profile config "functional-135520": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1006 14:40:42.522496  679803 config.go:182] Loaded profile config "functional-135520": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1006 14:40:42.522855  679803 cli_runner.go:164] Run: docker container inspect functional-135520 --format={{.State.Status}}
I1006 14:40:42.545062  679803 ssh_runner.go:195] Run: systemctl --version
I1006 14:40:42.545141  679803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-135520
I1006 14:40:42.565922  679803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32878 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/functional-135520/id_rsa Username:docker}
I1006 14:40:42.673328  679803 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-135520 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-135520 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ registry.k8s.io/kube-proxy              │ v1.34.1            │ fc25172553d79 │ 73.1MB │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
│ registry.k8s.io/kube-apiserver          │ v1.34.1            │ c3994bc696102 │ 89MB   │
│ registry.k8s.io/kube-controller-manager │ v1.34.1            │ c80c8dbafe7dd │ 76MB   │
│ registry.k8s.io/kube-scheduler          │ v1.34.1            │ 7dd6aaa1717ab │ 53.8MB │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ 5f1f5298c888d │ 196MB  │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-135520 image ls --format table --alsologtostderr:
I1006 14:40:42.751270  679956 out.go:360] Setting OutFile to fd 1 ...
I1006 14:40:42.751385  679956 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1006 14:40:42.751393  679956 out.go:374] Setting ErrFile to fd 2...
I1006 14:40:42.751398  679956 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1006 14:40:42.751606  679956 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-626179/.minikube/bin
I1006 14:40:42.752229  679956 config.go:182] Loaded profile config "functional-135520": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1006 14:40:42.752336  679956 config.go:182] Loaded profile config "functional-135520": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1006 14:40:42.752720  679956 cli_runner.go:164] Run: docker container inspect functional-135520 --format={{.State.Status}}
I1006 14:40:42.771476  679956 ssh_runner.go:195] Run: systemctl --version
I1006 14:40:42.771536  679956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-135520
I1006 14:40:42.791597  679956 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32878 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/functional-135520/id_rsa Username:docker}
I1006 14:40:42.896836  679956 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-135520 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-135520 image ls --format json --alsologtostderr:
[{"id":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","repoDigests":["registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"195976448"},{"id":"c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89","registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"76004181"},{"id":"7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813","repoDigests":["registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31","registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d
0530e9118e94b7d46fb3500"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"53844823"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/
k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"76103547"},{"id":"c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97","repoDigests":["registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964","registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"89046001"},{"id":"fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7","repoDigests":["registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a","registry.k8s.io/kube-proxy@sha
256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"73138073"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-135520 image ls --format json --alsologtostderr:
I1006 14:40:42.632085  679903 out.go:360] Setting OutFile to fd 1 ...
I1006 14:40:42.632354  679903 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1006 14:40:42.632365  679903 out.go:374] Setting ErrFile to fd 2...
I1006 14:40:42.632369  679903 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1006 14:40:42.632569  679903 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-626179/.minikube/bin
I1006 14:40:42.633170  679903 config.go:182] Loaded profile config "functional-135520": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1006 14:40:42.633291  679903 config.go:182] Loaded profile config "functional-135520": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1006 14:40:42.633687  679903 cli_runner.go:164] Run: docker container inspect functional-135520 --format={{.State.Status}}
I1006 14:40:42.651299  679903 ssh_runner.go:195] Run: systemctl --version
I1006 14:40:42.651344  679903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-135520
I1006 14:40:42.668627  679903 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32878 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/functional-135520/id_rsa Username:docker}
I1006 14:40:42.770931  679903 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)
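Note: the stdout above is one compact JSON array with one object per image (id, repoDigests, repoTags, and size in bytes as a string). A quick way to read it locally is to pipe the same command through jq; jq is assumed to be available on the host, and the profile name matches this run:

  # jq assumed installed; prints one "tag<TAB>size" line per image
  out/minikube-linux-amd64 -p functional-135520 image ls --format json \
    | jq -r '.[] | "\(.repoTags[0])\t\(.size)"' | sort

For the listing above this yields lines such as "registry.k8s.io/etcd:3.6.4-0	195976448".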

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-135520 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-135520 image ls --format yaml --alsologtostderr:
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
repoDigests:
- registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "195976448"
- id: c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "89046001"
- id: c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
- registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "76004181"
- id: fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7
repoDigests:
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
- registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "73138073"
- id: 7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "53844823"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-135520 image ls --format yaml --alsologtostderr:
I1006 14:40:42.852775  680012 out.go:360] Setting OutFile to fd 1 ...
I1006 14:40:42.852896  680012 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1006 14:40:42.852907  680012 out.go:374] Setting ErrFile to fd 2...
I1006 14:40:42.852914  680012 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1006 14:40:42.853118  680012 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-626179/.minikube/bin
I1006 14:40:42.853747  680012 config.go:182] Loaded profile config "functional-135520": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1006 14:40:42.853868  680012 config.go:182] Loaded profile config "functional-135520": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1006 14:40:42.854306  680012 cli_runner.go:164] Run: docker container inspect functional-135520 --format={{.State.Status}}
I1006 14:40:42.872002  680012 ssh_runner.go:195] Run: systemctl --version
I1006 14:40:42.872074  680012 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-135520
I1006 14:40:42.889756  680012 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32878 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/functional-135520/id_rsa Username:docker}
I1006 14:40:42.992324  680012 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (4.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-135520 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-135520 ssh pgrep buildkitd: exit status 1 (274.231759ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-135520 image build -t localhost/my-image:functional-135520 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-135520 image build -t localhost/my-image:functional-135520 testdata/build --alsologtostderr: (3.511872931s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-135520 image build -t localhost/my-image:functional-135520 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 7df2436e7a4
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-135520
--> cfc64f4bb10
Successfully tagged localhost/my-image:functional-135520
cfc64f4bb10d1ea4610643debfcaf6c383ba618cf7a55130edc2ec48103fc077
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-135520 image build -t localhost/my-image:functional-135520 testdata/build --alsologtostderr:
I1006 14:40:43.254687  680271 out.go:360] Setting OutFile to fd 1 ...
I1006 14:40:43.254821  680271 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1006 14:40:43.254831  680271 out.go:374] Setting ErrFile to fd 2...
I1006 14:40:43.254836  680271 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1006 14:40:43.255017  680271 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-626179/.minikube/bin
I1006 14:40:43.255873  680271 config.go:182] Loaded profile config "functional-135520": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1006 14:40:43.256799  680271 config.go:182] Loaded profile config "functional-135520": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1006 14:40:43.257396  680271 cli_runner.go:164] Run: docker container inspect functional-135520 --format={{.State.Status}}
I1006 14:40:43.275247  680271 ssh_runner.go:195] Run: systemctl --version
I1006 14:40:43.275308  680271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-135520
I1006 14:40:43.293004  680271 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32878 SSHKeyPath:/home/jenkins/minikube-integration/21701-626179/.minikube/machines/functional-135520/id_rsa Username:docker}
I1006 14:40:43.393450  680271 build_images.go:161] Building image from path: /tmp/build.2951574939.tar
I1006 14:40:43.393526  680271 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1006 14:40:43.402301  680271 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2951574939.tar
I1006 14:40:43.406698  680271 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2951574939.tar: stat -c "%s %y" /var/lib/minikube/build/build.2951574939.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.2951574939.tar': No such file or directory
I1006 14:40:43.406726  680271 ssh_runner.go:362] scp /tmp/build.2951574939.tar --> /var/lib/minikube/build/build.2951574939.tar (3072 bytes)
I1006 14:40:43.425798  680271 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2951574939
I1006 14:40:43.434132  680271 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2951574939 -xf /var/lib/minikube/build/build.2951574939.tar
I1006 14:40:43.443564  680271 crio.go:315] Building image: /var/lib/minikube/build/build.2951574939
I1006 14:40:43.443630  680271 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-135520 /var/lib/minikube/build/build.2951574939 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1006 14:40:46.694088  680271 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-135520 /var/lib/minikube/build/build.2951574939 --cgroup-manager=cgroupfs: (3.250431406s)
I1006 14:40:46.694153  680271 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2951574939
I1006 14:40:46.702358  680271 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2951574939.tar
I1006 14:40:46.709979  680271 build_images.go:217] Built localhost/my-image:functional-135520 from /tmp/build.2951574939.tar
I1006 14:40:46.710023  680271 build_images.go:133] succeeded building to: functional-135520
I1006 14:40:46.710030  680271 build_images.go:134] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-135520 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.01s)
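Note: the three STEP lines above imply a build context along these lines; this is a reconstruction for illustration, not the actual contents of testdata/build:

  # /tmp/build-demo is a hypothetical scratch dir; the real fixture lives in testdata/build
  mkdir -p /tmp/build-demo
  printf 'FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n' > /tmp/build-demo/Dockerfile
  echo demo > /tmp/build-demo/content.txt
  out/minikube-linux-amd64 -p functional-135520 image build -t localhost/my-image:demo /tmp/build-demo

As the stderr shows, with the crio runtime minikube ships the context to the node as a tarball and builds it there via `sudo podman build`.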

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.96s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:357: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.930353414s)
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-135520
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.96s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-135520 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.15s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-135520 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-135520 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.14s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.48s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "415.035363ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "70.573222ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.49s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-135520 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "342.616934ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "53.150968ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.40s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-135520 image rm kicbase/echo-server:functional-135520 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-135520 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.53s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.73s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-135520 /tmp/TestFunctionalparallelMountCmdspecific-port2551281271/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-135520 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-135520 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (299.243163ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1006 14:40:33.893438  629719 retry.go:31] will retry after 363.536826ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-135520 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-135520 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-135520 /tmp/TestFunctionalparallelMountCmdspecific-port2551281271/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-135520 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-135520 ssh "sudo umount -f /mount-9p": exit status 1 (282.051493ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-135520 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-135520 /tmp/TestFunctionalparallelMountCmdspecific-port2551281271/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.73s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (2.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-135520 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1055249216/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-135520 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1055249216/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-135520 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1055249216/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-135520 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-135520 ssh "findmnt -T" /mount1: exit status 1 (384.273793ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1006 14:40:35.705591  629719 retry.go:31] will retry after 708.880137ms: exit status 1
I1006 14:40:35.952884  629719 retry.go:31] will retry after 6.219923767s: Temporary Error: Get "http:": http: no Host in request URL
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-135520 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-135520 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-135520 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-135520 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-135520 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1055249216/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-135520 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1055249216/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-135520 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1055249216/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.05s)
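Note: the teardown exercised above can be driven by hand. `minikube mount` runs in the foreground, so each mount gets its own process, and `mount --kill=true` (the command run at functional_test_mount_test.go:370) tears down every mount process for the profile at once. A minimal sketch:

  # /tmp/src is a hypothetical host directory
  mkdir -p /tmp/src
  out/minikube-linux-amd64 mount -p functional-135520 /tmp/src:/mount1 &
  out/minikube-linux-amd64 mount -p functional-135520 /tmp/src:/mount2 &
  out/minikube-linux-amd64 -p functional-135520 ssh "findmnt -T /mount1"
  out/minikube-linux-amd64 mount -p functional-135520 --kill=true   # kills both mount processes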

                                                
                                    
TestFunctional/parallel/Version/short (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-135520 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

                                                
                                    
TestFunctional/parallel/Version/components (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-135520 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.51s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-135520 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: exit status 103
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-135520
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-135520
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-135520
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.47s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-616465 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.47s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.45s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-616465 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.45s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (1.22s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-616465 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-616465 --output=json --user=testUser: (1.223035269s)
--- PASS: TestJSONOutput/stop/Command (1.22s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.2s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-057185 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-057185 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (60.000695ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"b5989d34-b0c0-40ad-8fbc-4c26a7069458","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-057185] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"7a21f455-45c9-4a6b-9918-1c43b4cb7f8f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21701"}}
	{"specversion":"1.0","id":"45b8550e-6700-4410-8c56-cf82a487d5ba","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"df97fa8f-c90d-4e2f-ba3b-c5c361846818","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21701-626179/kubeconfig"}}
	{"specversion":"1.0","id":"815b6276-fd08-405d-9857-e5892765b9c7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21701-626179/.minikube"}}
	{"specversion":"1.0","id":"021230bf-322d-40ee-8227-297ee4daa754","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"b26d7c9e-1749-4cf3-be1e-253110a93df8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"f60fc5ca-cf8c-46b8-b40e-06aba604ffc9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-057185" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-057185
--- PASS: TestErrorJSONOutput (0.20s)
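Note: every stdout line above is a CloudEvents-style JSON object whose type field distinguishes steps (io.k8s.sigs.minikube.step), info messages, and errors (io.k8s.sigs.minikube.error). A consumer that only cares about the failure can filter on that type (jq assumed installed; "demo" is an arbitrary profile name):

  out/minikube-linux-amd64 start -p demo --output=json --driver=fail 2>/dev/null \
    | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data | "\(.name): \(.message) (exit \(.exitcode))"'

Against the output above this yields: DRV_UNSUPPORTED_OS: The driver 'fail' is not supported on linux/amd64 (exit 56)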

                                                
                                    
TestKicCustomNetwork/create_custom_network (37.49s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-627402 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-627402 --network=: (35.36504476s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-627402" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-627402
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-627402: (2.105287377s)
--- PASS: TestKicCustomNetwork/create_custom_network (37.49s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (23.37s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-020891 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-020891 --network=bridge: (21.409524345s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-020891" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-020891
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-020891: (1.942728371s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (23.37s)

                                                
                                    
TestKicExistingNetwork (24.3s)

                                                
                                                
=== RUN   TestKicExistingNetwork
I1006 15:17:30.348886  629719 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1006 15:17:30.366332  629719 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1006 15:17:30.366449  629719 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1006 15:17:30.366475  629719 cli_runner.go:164] Run: docker network inspect existing-network
W1006 15:17:30.382688  629719 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1006 15:17:30.382719  629719 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I1006 15:17:30.382738  629719 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I1006 15:17:30.382938  629719 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1006 15:17:30.399880  629719 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0003e83e0}
I1006 15:17:30.399930  629719 network_create.go:124] attempt to create docker network existing-network 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
I1006 15:17:30.399983  629719 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1006 15:17:30.455688  629719 network_create.go:108] docker network existing-network 192.168.49.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-726483 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-726483 --network=existing-network: (22.204312777s)
helpers_test.go:175: Cleaning up "existing-network-726483" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-726483
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-726483: (1.958726576s)
I1006 15:17:54.635775  629719 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (24.30s)
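Note: this test pre-creates the Docker network itself and then points minikube at it with --network, so minikube adopts the existing network instead of creating one. The two commands, lifted from the log above (network and profile names are the test's own):

  docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 \
    -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 \
    --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network \
    existing-network
  out/minikube-linux-amd64 start -p existing-network-726483 --network=existing-network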

                                                
                                    
TestKicCustomSubnet (25.28s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-837757 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-837757 --subnet=192.168.60.0/24: (23.147830175s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-837757 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-837757" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-837757
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-837757: (2.112110987s)
--- PASS: TestKicCustomSubnet (25.28s)

                                                
                                    
TestKicStaticIP (24.6s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-097659 --static-ip=192.168.200.200
E1006 15:18:33.594813  629719 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-626179/.minikube/profiles/functional-135520/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-097659 --static-ip=192.168.200.200: (22.373822443s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-097659 ip
helpers_test.go:175: Cleaning up "static-ip-097659" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-097659
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-097659: (2.085669508s)
--- PASS: TestKicStaticIP (24.60s)
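Note: the two tests above share one pattern: start with a network flag, then verify through introspection. The start and verification commands, taken verbatim from the logs:

  out/minikube-linux-amd64 start -p custom-subnet-837757 --subnet=192.168.60.0/24
  docker network inspect custom-subnet-837757 --format "{{(index .IPAM.Config 0).Subnet}}"

  out/minikube-linux-amd64 start -p static-ip-097659 --static-ip=192.168.200.200
  out/minikube-linux-amd64 -p static-ip-097659 ip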

                                                
                                    
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (6.64s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-688873 --memory=3072 --mount-string /tmp/TestMountStartserial2897151940/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-688873 --memory=3072 --mount-string /tmp/TestMountStartserial2897151940/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (5.643308425s)
--- PASS: TestMountStart/serial/StartWithMountFirst (6.64s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-688873 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.27s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (5.67s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-706528 --memory=3072 --mount-string /tmp/TestMountStartserial2897151940/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-706528 --memory=3072 --mount-string /tmp/TestMountStartserial2897151940/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (4.674161425s)
--- PASS: TestMountStart/serial/StartWithMountSecond (5.67s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-706528 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.26s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.66s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-688873 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-688873 --alsologtostderr -v=5: (1.655653889s)
--- PASS: TestMountStart/serial/DeleteFirst (1.66s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-706528 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.26s)

                                                
                                    
TestMountStart/serial/Stop (1.19s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-706528
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-706528: (1.192787484s)
--- PASS: TestMountStart/serial/Stop (1.19s)

                                                
                                    
TestMountStart/serial/RestartStopped (7.78s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-706528
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-706528: (6.783964594s)
--- PASS: TestMountStart/serial/RestartStopped (7.78s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-706528 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.26s)

                                                
                                    

Test skip (18/166)

TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

                                                
                                    
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

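Note: several skips in this run are container-runtime gates: the suite was started with --container-runtime=crio, so docker-only tests bow out early. The real check lives at docker_test.go:41 (not shown in this report); a minimal sketch of that pattern, with a hypothetical containerRuntime accessor standing in for minikube's test-flag plumbing, could be:

// sketch_test.go - illustrative only, not minikube source.
package sketch

import (
    "os"
    "testing"
)

// containerRuntime is a hypothetical stand-in; minikube reads the runtime
// from its test flags rather than an environment variable.
func containerRuntime() string {
    if rt := os.Getenv("CONTAINER_RUNTIME"); rt != "" {
        return rt
    }
    return "docker"
}

func TestDockerFlagsSketch(t *testing.T) {
    // Guard clause: this test only makes sense on the docker runtime.
    if rt := containerRuntime(); rt != "docker" {
        t.Skipf("skipping: only runs with docker container runtime, currently testing %s", rt)
    }
    // ... docker-specific flag assertions would follow ...
}
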
TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:114: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

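Note: the HyperKit skips here ("Skip if not darwin.") are plain operating-system gates; HyperKit is a macOS-only hypervisor. The standard Go idiom for such a guard, presumably what driver_install_or_update_test.go does at the cited lines, is a runtime.GOOS check:

// sketch_test.go - illustrative only, not minikube source.
package sketch

import (
    "runtime"
    "testing"
)

func TestHyperKitSketch(t *testing.T) {
    // HyperKit only exists on macOS, so the test is meaningless elsewhere.
    if runtime.GOOS != "darwin" {
        t.Skip("Skip if not darwin.")
    }
    // ... darwin-specific driver install/update checks ...
}
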
TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:178: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

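Note: the three DNS-forwarding skips combine two gates, driver and OS, per the message at functional_test_tunnel_test.go:99. A sketch of that compound guard, with a hypothetical hostDriver variable in place of minikube's real driver configuration:

// sketch_test.go - illustrative only, not minikube source.
package sketch

import (
    "runtime"
    "testing"
)

// hostDriver is a hypothetical stand-in; minikube derives the driver
// from its test configuration, not a package variable.
var hostDriver = "docker"

func TestDNSResolutionSketch(t *testing.T) {
    // DNS forwarding through the tunnel needs both the hyperkit driver
    // and a darwin host; any other combination skips.
    if hostDriver != "hyperkit" || runtime.GOOS != "darwin" {
        t.Skip("DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding")
    }
    // ... dig-based resolution checks through the tunnel ...
}
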
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestFunctionalNewestKubernetes (0s)

=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

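Note: TestGvisorAddon is gated on a command-line flag rather than the environment; the log records --gvisor=false. A sketch of that flag-gated pattern using a package-level test flag (the real flag is registered in minikube's test setup, not shown here):

// sketch_test.go - illustrative only, not minikube source.
package sketch

import (
    "flag"
    "testing"
)

// gvisor mirrors the --gvisor flag the report mentions; go test parses
// package-level flags like this one before tests run.
var gvisor = flag.Bool("gvisor", false, "run the gvisor addon test")

func TestGvisorAddonSketch(t *testing.T) {
    if !*gvisor {
        t.Skipf("skipping test because --gvisor=%v", *gvisor)
    }
    // ... gvisor addon verification ...
}
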
TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)